This reminds me of that one time when I was on a date with a girl from the history department who somehow bemusedly sat through my entire mini-lecture on comparing infinite sets. Twenty years and three kids later, she'll still occasionally look me straight in the eye and declare "my infinity is bigger than your infinity."
Wow, I did a very similar thing on the first date with my now wife. I explained the halting problem and Gödel's incompleteness theorems. We also talked about her (biomedical) research, so it wasn't a one-sided conversation.
I think dominating the conversation on a first date is a risk (which I was mindful of), but just being yourself and talking about something you're truly passionate about is the key.
"So you see if the chance of pregnancy is constant per..uh..encounter, and given that the condom just broke, we're on a spectrum from the chance of a second round roughly doubling the odds but the overall chance is still small, or it doesn't make much difference anyway. Either way, the numbers say we should go again."
Way back then, calculus was a culture war battleground. Bishop Berkeley famously argued the foundations of calculus weren't any better than those of theology. This sort of thing motivated much work on shoring them up, getting rid of infinitesimals and the like (or, later, making infinitesimals rigorous in nonstandard analysis).
There’s something called “not giving a fuck” that works in those situations. The crux of it though is you need to “know thyself” or you’ll be forever your worst critic and enemy.
Also you're being open and readable to the other person. You're not being deceptive or putting on a show, which is usually what rustles people's jimmies.
Get this incel trolling out of here. No, all that would happen if your date weren’t interested in math/whatever you’re into is a polite message “I had a great time but I don’t think we have much in common” and leave it at that.
His oddly specific fear of being "plastered on a bunch of facebook new-york-dating-experience groups" sounds like women have had to warn each other about him before, and it probably wasn't about his interest in math, but something much worse.
I'm curious. Did either of you ever notice the implicit philosophical assumptions that you have to make to come to the conclusion that one infinity can be larger than another?
Despite the fact that this was actively debated for decades, modern math courses seldom acknowledge the fact that they are making unprovable intellectual leaps along the way.
> Despite the fact that this was actively debated for decades, modern math courses seldom acknowledge the fact that they are making unprovable intellectual leaps along the way.
That’s not at all true, usually, at the level where you are dealing with different infinities. That level tends to come after the (usually fairly early) part dealing with proofs, and with the fact that all mathematics rests on “unprovable intellectual leaps” which are encoded into axioms: everything in math which is provable is only provable based on a particular chosen set of axioms.
It may be true that math beyond that basic level doesn’t make a point of going back and explicitly reviewing that point, but it is just kind of implicit in everything later.
I guarantee that a naive presentation doesn't actually include the axioms, and doesn't address the philosophical questions dividing formalism from constructivism.
Uncountable need not mean more. It can mean that there are things that you can't figure out whether to count, because they are undecidable.
> I guarantee that a naive presentation doesn't actually include the axioms
But you said "modern math courses". Are you now talking about a casual conversation? I mean the OP's story is that his wife just liked listening to him talk about his passions.
> Uncountable need not mean more.
Sure. But that doesn't mean that there aren't differing categories. However you slice it, we can operate on these things in different ways. Real or not, the logic isn't consistent between these things, but they do fall out into differing categories.
If you're trying to find mistakes in the logic, does it not make sense to push it to its bounds? Look at the Banach-Tarski Paradox. Sure, normal people hear about it and go "oh wow, cool." But when it was presented in my math course it was used to open a discussion of why we might want to question the Axiom of Choice, and of how removing it creates new concerns. Really, the "paradox" was explored to push the bounds of the axiom of choice in the first place. They asked "can this axiom be abused?" And the answer is yes. Now the question is "does this matter, since infinity is non-physical? Or does it matter despite infinity being non-physical?"
You seem to think mathematicians, physicists, and scientists in general believe infinities are physical. As one of those people, I'm not sure why you think that. We don't. I mean math is a language. A language used because it is pedantic and precise. Much the same way we use programming languages. I'm not so sure why you're upset that people are trying to push the bounds of the language and find out what works and doesn't work. Or are you upset that non-professionals misunderstand the nuances of a field? Well... that's a whole other conversation, isn't it...
Your guesses at what I seem to think are completely off base and insulting.
When I say "modern math courses", I mean like the standard courses that most future mathematicians take on their way to various degrees. For all that we mumble ZFC, it is darned easy to get a PhD in mathematics without actually learning the axioms of ZFC. And without learning anything about the historical debates in the foundations of mathematics.
Honestly it's difficult to understand exactly what you're arguing. Because I understand laymen not understanding your argument about infinities not being real (and even many HN users don't understand that code is math, but a CS degree doesn't take you far in math; some calc and maybe lin alg), but are we concerned about laymen? I too am frustrated by nonexperts having strong opinions and having difficulties updating them, but that's not a culture problem. We're on HN and we know the CS stereotypes, right?
If instead you're talking about experts, then I learned about what you're talking about in my Linear 2 course in a physics undergrad and have seen the topic appear many times since, even outside my own reading of set theory. The axiom of choice seems to have even entered more mainstream nerd knowledge. It's very hard to learn why AoC is a problem without learning about how infinities can be abused. But honestly I don't know any person that's even an amateur mathematician who thinks infinities are physical.
The fact that you think I'm talking about the axiom of choice demonstrates that you didn't understand what I'm talking about. I would also be willing to bet a reasonable sum of money that this topic did not come up in your Linear 2 course in physics undergrad.
The arguments between the different schools of philosophy in math are something that most professional mathematicians are unaware of. Those who know about them generally learned them while learning about either the history of math or the philosophy of math. I personally only became aware of them while reading https://www.amazon.com/Mathematical-Experience-Phillip-J-Dav.... I didn't learn more about the topic until I was in grad school, and that was from personal conversations. It was never covered in any course that I took, either in undergraduate or graduate school.
Now I'm curious. Was there anything that I said that should have been said more clearly? Or was it hard to understand because you were trying to fit what I said into what you know about an entirely unrelated debate about the axiom of choice?
> The fact that you think I'm talking about the axiom of choice demonstrates that you didn't understand what I'm talking about.
Dude... just a minute ago you were complaining about ZFC... Sure, I brought up AoC but your time to protest was then.
The reason I brought up AoC is because it is a common way to learn about the abuse of infinity and where axioms need be discussed. Both things you brought up. I think you are reading further into this than I intended.
> Now I'm curious. Was there anything that I said that should have been said more clearly?
Is this a joke?
When someone says
>> Honestly it's difficult to understand exactly what you're arguing.
That's your chance to explain. It is someone explicitly saying... I'm trying to understand but you are not communicating efficiently.
This is even more frustrating as you keep pointing out that this is not common knowledge. So why are you also communicating like it is?! If it is something so few know about then be fucking clear. Don't make anyone guess. Don't just link a book; use your own words, and link a book if you want to suggest further reading, but not as "this is the entire concept I'm talking about". Otherwise we just have to guess, and you getting pissed off that we guess wrong is just downright your own fault.
So stop shooting yourself in the foot and blaming others. If people aren't understanding you, try assuming they can't read your mind and don't have the exact same knowledge you do. Talk about fundamental principles...
That point being that what we mean by "exists" is fundamentally a philosophical question. And our conclusions about what mathematical things exist will depend on how we answer that question. And very specifically, there are well-studied mathematical philosophies in which uncountable sets do not have larger cardinalities than countable ones.
If none of those explanations wind up being clear for you, then I'm going to need feedback from you to have a chance to explain this to you. Because you haven't told me enough for me to make any reasonable guess what the sticking point is between you and understanding. And without that, I have no chance of guessing what would clarify this for you.
The "philosophical questions" dividing formalism from constructivism are greatly overstated. The point of having those degrees of undecidability or uncountability is precisely to be able to say things like "even if you happen to be operating under strong additional assumptions that let you decide/count X, that still doesn't let you decide/count Y in general." That's what formalism is: a handy way of making statements about what you can't do constructively in the general case.
To be fair, constructivists tend to prefer talk about different "universes" as opposed to different "sizes" of sets, but that's all it is: little more than a mere difference in terminology! You can show equiconsistency statements across these different points of view.
Yes, you can show such equiconsistency statements. As Gödel proved, for any set of classical axioms, there is a corresponding set of intuitionistic axioms. And if the classical axioms are inconsistent, then so is the intuitionistic equivalent. (Given that intuitionistic reasoning is classically valid, an inconsistency in the intuitionistic axioms trivially gives you one in the classical axioms.)
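For reference, the correspondence in question is (roughly) the Gödel–Gentzen negative translation. This is my own summary of the standard presentation, not anything stated upthread:

```latex
\begin{align*}
  P^{N} &:= \lnot\lnot P && \text{($P$ atomic)}\\
  (A \land B)^{N} &:= A^{N} \land B^{N}\\
  (A \lor B)^{N} &:= \lnot(\lnot A^{N} \land \lnot B^{N})\\
  (A \to B)^{N} &:= A^{N} \to B^{N}\\
  (\forall x\, A)^{N} &:= \forall x\, A^{N}\\
  (\exists x\, A)^{N} &:= \lnot \forall x\, \lnot A^{N}
\end{align*}
```

Classical logic proves A exactly when intuitionistic logic proves A^N (and similarly for theories like PA versus HA), which is why an inconsistency on either side transfers to the other.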
So the care that intuitionists take does not lead to any improvement in consistency.
However the two approaches lead to very different notions of what it means for something to mathematically exist. Despite the formal correspondences, they lead to very different concepts of mathematics.
I'm firmly of the belief that constructivism leads to concepts of existence that better fit the lay public than formalism does.
Probably not. But this one time we had an argument and I made a statement along the lines of "I'm right, naturally." She went irrational. I lost the argument.
Here's a hint. When someone makes a reference to something that was actively debated for decades, and you're not familiar with said debates, you should probably assume that you're missing some piece of relevant knowledge.
What leaps are "unprovable"? I'm curious, that doesn't sound right.
For sure there are valid arguments on whether or not to use certain axioms which allow or disallow some set theoretical constructions, but given ZFC, is there anything that follows that is unprovable?
When you say "given ZFC", you're assuming a lot. Including a notion of mathematical existence which bears little relation to any concept that most lay people have of what mathematical existence might mean.
In particular, you have made sufficient assumptions to prove that almost all real numbers that exist can never be specified in any possible finite description. In what sense do they exist? You also wind up with weirder things. Such as well-specified finite problems that provably have a polynomial time algorithm to solve...but for which it is impossible to find or verify that algorithm, or put an upper bound on the constants in the algorithm. In what sense does that algorithm exist, and is finite?
Does that sound impossible? An example of an open problem whose algorithm may have those characteristics is an algorithm to decide which graphs can be drawn on a torus without any self-crossings.
If our notion of "exists" is "constructible", all possible mathematical things can fit inside of a countable universe. No set can have more than that.
> When you say "given ZFC", you're assuming a lot.
Errr, I'm just assuming the axioms of ZFC. That's literally all I'm doing.
> In what sense do [numbers that can't be finitely specified] exist?
In the sense that we can describe rules that lead to them, and describe how to work with them.
I understand that you're trying to tie the notion of "existence" to constructibility, and that's fine. That's one way to play the game. Another is to use ZFC and be fine with "weird, unintuitive to laypeople" outcomes. Both are interesting and valid things to do IMO. I'm just not sure why one is obviously "better" or "more real" or something. At the end, it's all just coming up with rules and figuring out what comes out of them.
My point is that going from a lay understanding of mathematics to "just accept ZFC" means jumping past a variety of debatable philosophical points, and accepting a standard collection of answers to them. Mathematicians gloss over that.
On the other hand, I think it's really cool to teach laypeople about things like "sizes of infinities", etc. They are deep math concepts that can be taught with relatively simple analogies that most people understand, and they're interesting things to know. I know that I personally loved learning about them as a kid, before I had almost any knowledge of math - it's one of the reasons that while I initially didn't connect with other areas of math, I found set theory delightful as a kid.
I just feel like if you need to first walk people through a bunch of philosophical back and forth on constructivism, you'll never get to the fun stuff.
We each find different things delightful. What I like, you may not. And vice versa.
But it is easy to present deep ideas from constructivism, without mentioning the word constructivism. Or even acknowledging that the philosophy exists.
For example the second half of https://math.stackexchange.com/questions/5074503/can-pa-prov... is an important constructivist thing. It shows why everything that a constructivist could ever be interested in mathematically, can be embedded in the natural numbers. With all of the constructions needing nothing more than the Peano Axioms. (Proving the results may need stronger axioms though...)
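A tiny, concrete illustration of that embedding idea (my own sketch, not from the linked answer): the Cantor pairing function packs any pair of naturals into one natural, reversibly, and iterating it lets you encode tuples, integers, rationals, finite lists, and so on as single natural numbers.

```python
from math import isqrt

def pair(x, y):
    # Cantor pairing: walk the diagonals x + y = 0, 1, 2, ...
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # Invert it: find which diagonal we're on, then the offset within it.
    w = (isqrt(8 * z + 1) - 1) // 2   # largest w with w*(w+1)//2 <= z
    y = z - w * (w + 1) // 2
    return w - y, y

assert all(unpair(pair(x, y)) == (x, y) for x in range(100) for y in range(100))
```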
From my point of view, https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach does something similar. That book got a lot of people interested in basic concepts around recursion, computation, and what it means to think. Absolutely everything in it works constructively. And yet that philosophy is not mentioned. Not even once.
The only point where a constructivist need discuss all of the philosophical back and forth on constructivism, is in explaining why a constructivist need not accept various claims coming out of classical mathematics. And even that discussion would not be so painful if people who have learned classical mathematics were more aware of the philosophical assumptions that they are making.
> We each find different things delightful. What I like, you may not. And vice versa.
To be honest, I don't feel like I know enough about the constructivist philosophy. What would be a good place to start if I want to learn more about it?
I haven't yet read your PA proving Goodstein sequences article, though I have skimmed it and it is, indeed, super interesting.
And for the record, Godel, Escher, Bach was probably the single most important influence on me even starting to get interested in computation, etc.
You are being very cryptic. Are you trying to say that the existence of uncountable sets requires the axiom of choice? If you are, that's false. If you aren't, I'm not sure what you are trying to say.
I'm definitely not trying to say that the existence of uncountable sets requires the axiom of choice. Cantor's diagonalization argument for the reals demonstrates otherwise.
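(A quick sketch of the diagonal trick, just as an illustration: given any purported enumeration of infinite 0/1 sequences, flip the n-th digit of the n-th sequence and you get a sequence the enumeration missed.)

```python
def diagonal(enumeration):
    """Given enumeration(n) -> (k -> digit), return a 0/1 sequence that
    differs from enumeration(n) at position n, hence isn't in the list."""
    return lambda n: 1 - enumeration(n)(n)

# Toy example; of course no real enumeration covers all sequences,
# which is exactly what the argument shows.
toy = lambda n: (lambda k: (n >> k) & 1)   # n-th sequence = binary digits of n
d = diagonal(toy)
assert all(d(i) != toy(i)(i) for i in range(1000))
```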
I'm saying that to go from the uncountability of the reals to the idea that this implies that the infinity of the reals is larger, requires making some important philosophical assumptions. Constructivism demonstrates that uncountable need not mean more.
On the algorithm example, you could have asked what I was referring to.
The result that I was referencing follows from the https://en.wikipedia.org/wiki/Robertson%E2%80%93Seymour_theo.... The theorem says that any class of finite graphs which is closed under graph minors, must be completely characterized by a finite set of forbidden minors. Given that set of forbidden minors, we can construct a polynomial time test for membership in the class - just test each forbidden minor in turn.
The problem is that the theorem is nonconstructive. While it classically proves that the set exists, it provides no way to find it. Worse yet, it can be proven that in general there is no way to find or verify the minimal solution. Or even to provide an upper bound on the number of forbidden minors that will be required.
This need not hold in special cases. For example planar graphs are characterized by 2 forbidden minors.
For the toroidal graphs, as https://en.wikipedia.org/wiki/Toroidal_graph will verify, the list of known forbidden minors currently has 17,523 graphs. We have no idea how many more there will be. Nor do we have any reason to believe that it is possible to verify the complete list in ZFC. Therefore the polynomial time algorithm that Robertson-Seymour says must exist, does not seem to exist in any meaningful and useful way. Such as, for example, being findable or provably correct from ZFC.
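To make the strangeness concrete, here is the shape of the algorithm Robertson-Seymour promises. This is just a sketch; `contains_minor` stands in for a minor-containment test (which runs in polynomial time for each fixed H), and the whole non-constructive content hides in the `forbidden_minors` argument:

```python
def in_minor_closed_class(G, forbidden_minors, contains_minor):
    """Membership test for a minor-closed class of graphs.

    forbidden_minors: the finite set Robertson-Seymour guarantees exists.
    contains_minor:   a minor-containment test, polynomial for each fixed H.
    For planar graphs the list is just {K5, K3,3}; for toroidal graphs nobody
    can exhibit or verify the full list, so this "polynomial-time algorithm"
    exists classically but cannot actually be written down and verified.
    """
    return not any(contains_minor(G, H) for H in forbidden_minors)
```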
> you're assuming a lot. Including a notion of mathematical existence which bears little relation to any concept that most lay people have of what mathematical existence might mean.
John Horton Conway:
> It's a funny thing that happens with mathematicians. What's the ontology of mathematical things? How do they exist? In what sense do they exist? There's no doubt that they do exist but you can't poke and prod them except by thinking about them. It's quite astonishing and I still don't understand it, having been a mathematician all my life. How can things be there without actually being there? There's no doubt that 2 is there or 3 or the square root of omega. They're very real things. I still don't know the sense in which mathematical objects exist, but they do. Of course, it's hard to say in what sense a cat is out there, too, but we know it is, very definitely. Cats have a stubborn reality but maybe numbers are stubborner still. You can't push a cat in a direction it doesn't want to go. You can't do it with a number either.
In the sense that all statements of non-constructive "existence" are made, viz. "you can't prove that they don't exist in the general case", so you are allowed to work under the stronger assumption that they also exist constructively, without any contradiction resulting. That can certainly be useful in some applications.
Sure, we can choose to work in a set of axioms that says that there exists an oracle that can solve the Halting problem.
But the fact that such systems don't create contradictions emphatically *DOES NOT* demonstrate the constructive existence of such an oracle. Doubly not given that in various usual constructivist systems, it is easily provable that nothing that exists can serve as such an oracle.
If such a system proved that the answer to some decidable question was x, when the actual answer was y, then the system would prove a contradiction. If the system doesn’t prove a contradiction, then that situation doesn’t happen, so you can trust its answers to decidable questions.
If the only questions you accept as meaningful are the decidable ones, then you can trust its answers for all the questions you accept as meaningful and for which it has answers.
Also, “provable that nothing that exists can serve as such an oracle” seems pretty presumptive about what things can exist? Shouldn’t that be more like, “nothing which can be given in such-and-such way (essentially, no computable procedure) can be such an oracle”?
Why treat it as axiomatic that nothing that isn’t Turing-computable can exist? It seems unlikely that any finite physical object can compute any deterministic non-Turing-computable function (because it seems like state spaces for bounded regions of space have bounded dimension), but that’s not something that should be a priori, I think.
I guess it wouldn’t really be verifiable if such a machine did exist, because we would have no way to confirm that it never errs? Ah, wait, no, maybe using the MIP* = RE result, maybe we could in principle use that to test it?
You're literally talking about how I should regard the hypothetical answers that might be produced by something that I think doesn't exist. There's a pretty clear case of putting the cart before the horse here.
On being presumptive about what things can exist, that's the whole point of constructivism. Things only exist when you can construct them.
We start with things that everyone accepts, like the natural numbers. We add to that all of the mathematical entities that can be constructed from those things. This provides us with a closed and countable universe of possible mathematical entities. We have a pretty clear notion of what it means for something in this universe to exist. We cannot be convinced of the existence of anything that is outside of the universe without making extra philosophical assumptions. Philosophical assumptions of exactly the kind that constructivists do not like.
This constructible universe includes a model of computation that fits Turing machines. But it does not contain the ability to describe or run any procedure that can't fit onto a Turing machine.
Therefore an oracle to decide the Halting problem does not exist within the constructible universe. And so your ability to imagine such an oracle, won't convince a constructivist to accept its existence.
You can think that something doesn't exist in the general case, while still allowing that it might exist in unspecified narrow cases where additional constraints could apply. For example, there might be algorithms that can decide the halting problem for some non-Turing complete class of programs. Being able to talk in full generality about how such special cases might work is the whole point of non-constructive reasoning. It's "non-constructive" in that it states "I'm not going to construct this just yet".
Well yes. We can certainly make a function that acts something like that oracle in some special cases. But my point was to give an example of something that cannot be constructively created. The oracle that I described cannot exist within the universe of constructible things.
> Therefore an oracle to decide the Halting problem does not exist within the constructible universe.
I might be confused here, but isn't an Oracle to decide the halting problem something that everyone agrees doesn't exist?
The whole idea is for this to be a thought experiment. "If we magically had a way to decide the halting problem, how would that affect things" seems like a normal hypothetical question.
You literally cannot doubt the existence of this oracle, without doubting what existence means in classical mathematics.
Here is why a classical mathematician would say that this oracle exists.
Let f(program, input, n) be 1 or 0 depending on whether the program program, given input input, is still running at step n. This is a perfectly well-behaved mathematical function. In fact it is a computable one - we can compute it by merely running a simulation of a computer for a fixed number of steps.
Let oracle(program, input) be the limit, as n goes to infinity, of f(program, input, n). Classically this limit always exists, and always gives us 0 or 1. The fact that we happen to be unable to compute it, doesn't change the fact that this is a perfectly well-defined function according to classical mathematics.
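Here is the computable part of that, sketched with "programs" modeled as Python generators (each yield is one step; that modeling choice is mine). Every individual f(program, input, n) is computable; the limit is exactly what isn't:

```python
def f(program, inp, n):
    """Return 1 if program(inp) is still running after n steps, else 0."""
    gen = program(inp)
    try:
        for _ in range(n):
            next(gen)
    except StopIteration:
        return 0   # halted within n steps
    return 1       # still running at step n

def halts_after(x):        # a program that halts after x steps
    for _ in range(x):
        yield

def runs_forever(x):       # a program that never halts
    while True:
        yield

print(f(halts_after, 5, 100))   # 0
print(f(runs_forever, 5, 100))  # 1, and would be 1 for every n
# oracle(program, inp) = lim_{n -> infinity} f(program, inp, n):
# well-defined classically, but nothing computable realizes it.
```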
If you give up the existence of this oracle, you might as well give up the existence of any real numbers that do not have a finite description. Which is to say, almost all of them. Why? Because the set of finite descriptions is countable, and therefore the set of real numbers that admit a finite description is also only countable. But there are an uncountable number of real numbers, so almost all real numbers do not admit a finite description.
The real question isn't whether this oracle exists. It is what you want the word "exists" to mean.
If I'm following you, then most "mathematical" CS is based on constructivist foundations? E.g. while a halting problem Oracle might "exist" in the mathematical sense, it's not considered to "exist" for most purposes of deciding complexity classes, etc.
> The real question isn't whether this oracle exists. It is what you want the word "exists" to mean.
I was going to say the same thing. I'm not sure what "exists" means in some of these discussions.
It would be more accurate to say that most mathematical CS fits inside of constructivist foundations. Of course it also fits inside of classical foundations. So someone with constructivist inclinations may be drawn to that field. But participation in that field doesn't make you a constructivist.
As for what exists means, here are the three basic philosophies of mathematics.
The oldest is Platonism. It is the belief that mathematics is real, and we are trying to discover the right way to do it. Ours is not to understand how it is to exist, it is to try to figure out what actually exists. Kurt Gödel is a good example of someone who argued for this. See https://journals.openedition.org/philosophiascientiae/661 for a more detailed exploration of his views, and how they changed over time. (His Platonism does seem to have softened over time.)
Historically this philosophy is rooted in Plato's theory of Forms. Where our real world reflects an ideal world created by a divine Demiurge. With the rise of Christianity, that divine being is obviously God. This fit well with the common idea during the Scientific Revolution that the study of science and mathematics was an exploration of the mind of God.
Formalism dates back to David Hilbert. In Hilbert's own description, it reduces mathematics to formal symbol manipulation according to formal rules. It's a game to figure out what the consequences are of the axioms that were chosen. As for existence, "If the arbitrarily posited axioms together with all their consequences do not contradict each other, then they are true and the things defined by these axioms exist. For me, this is the criterion of truth and existence." See page 39 of https://philsci-archive.pitt.edu/17600/1/bde.pdf for a reference.
In other words if we make up any set of axioms and they don't contradict each other, the things that those axioms define have mathematical existence. Whether or not we can individually describe those things, or learn about them.
Over on the constructivist side of the fence, there are a wide range of possible views. But they share the idea that mathematical things can only exist when there is a way to construct them. That raises the question of what counts as a construction, and the different schools answer it differently.
Finitism only accepts the existence of finite things. In an extreme form, even the set of natural numbers doesn't exist. Only individual natural numbers. Goodstein of the Goodstein sequence is a good example of a finitist.
Intuitionism has the view that mathematics only exists in the minds of men. Anything not accessible to the minds of men, doesn't exist. The best known adherent of this philosophy is Brouwer.
My sympathies generally lie with the Russian school, founded by Markov. (Yes, the Markov that Markov chains are named after.) It roots mathematics in computability.
Errett Bishop is an example of a more pragmatic version of constructivism. Rather than focus on the philosophical claims, he pragmatically focuses on what can be demonstrated constructively. https://www.amazon.com/Foundations-Constructive-Analysis-Err... is his best known work.
Everyone agrees that you can't write an algorithm for a Turing machine (or computational equivalent) that decides the Halting problem for Turing machines in every case. Since this is explicitly worded as "you can't write an algorithm for..." it's in fact talking about a kind of constructive existence and saying that it doesn't apply. The oracle concept is normally phrased about "If we magically had a way to decide this undecidable problem in every case" but its real utility from a constructive POV is talking about special cases that you haven't bothered to narrow down just yet.
This is exactly what I’m saying is presumptive! If constructivism is to earn the merit of being less presumptive by virtue of not assuming the existence of various things, it should also not assume the non-existence of those things.
Which, I think many visions of constructivism do earn this merit, but not your description of it.
The underlying problem is that constructivism and non-constructive reasoning are using the word "exists" (and, relatedly, the logical disjunction) to mean very different things. The constructive meaning for "exists" is certainly more intuitive, so it makes sense that constructivists would want it by 'default'; but the non-constructive operator (which a constructivist would preferably understand as "is merely allowed to exist"), while somewhat more subtle, has a usefulness of its own.
So having a different philosophy from you makes me presumptive?
What makes you presume that you have any business telling someone with different beliefs from you, what is OK to believe? You may believe in the existence of whatever you like. Whether that be numbers that cannot be specified, or invisible pink unicorns.
I'll be over in the corner saying that your belief does not compel me to agree with you on the question of what exists. Not when your belief follows from formalism, which explicitly abandons any pretense of meaningfulness to its abstract symbol manipulation.
No, that’s not what I said. Thinking you can determine a-priori that something that is logically self-consistent, cannot exist, if there is no reason that such a thing being physically instantiated would imply a logical contradiction, is the thing I think is presumptive.
Merely believing that such a thing (a halting oracle) doesn’t exist, isn’t something I meant to call presumptive, only believing that you can know a-priori (with certainty) that such things cannot exist.
I don’t claim that you are obligated to agree with me that they do exist. Someone who believes they don’t, but doesn’t believe they can know this as certain a-priori knowledge, would be no more presumptive than I am, and someone who is agnostic on the question of whether they exist would be less presumptive than I am.
Also, I disagree with your notion of “meaningfulness”. At a minimum, all statements in the arithmetic hierarchy are meaningful. The continuum hypothesis might in a certain sense not be meaningful.
> Merely believing that such a thing (a halting oracle) doesn’t exist, isn’t something I meant to call presumptive, only believing that you can know a-priori (with certainty) that such things cannot exist.
If you think that I was making that case, then you have misunderstood something important.
Constructivism is a statement about what kinds of arguments will convince me that things exist.
Could things exist that I don't believe in? Absolutely! There could well be a bank account with my name on it that I don't know about. Its existence is possible, and my lack of belief in it is no skin off of its back. But I still don't believe that it exists.
Similarly, the Platonists could be correct. There could be an omniscient God whose perfect mind gives existence to a perfect system of mathematics, beyond human comprehension. I have no way to prove that there isn't such a God, and therefore that there isn't such a perfect mathematics.
However the potential for such things to exist is a point of theology. I do not believe in their existence. Just as I do not believe in the existence of Santa. In neither case can I prove that they don't exist. And if you choose to believe in them, that's your business. Not mine.
There is nothing presumptive in my laying out the rules of reason that I will accept as convincing to me. There is a lot of presumption if anyone else comes along and tells me that I should think differently about unprovable propositions.
Now it happens to be the case that from the rules of reason that I use, I provably can't be convinced of the existence of certain things. That's a mathematical theorem. But the fact that I can't be convinced, doesn't prove that you shouldn't be convinced. You are free to be convinced of all of the unprovable assertions that you wish. And it is also true that on something like this, I have no way to convince you that it doesn't exist.
On meaningfulness, meaning is in the eye of the beholder. For example there are people who are willing to pay a million dollars for a century old stamp which was misprinted with the airplane upside-down. (See https://en.wikipedia.org/wiki/Inverted_Jenny to verify that.) They clearly find great meaning in that stamp. But I don't.
So again, you're free to find meaning in whatever you want. But you're in the wrong to object that I don't find meaning in what you consider important.
> emphatically DOES NOT demonstrate the constructive existence of such an oracle
Of course, but it shows that you can assume that such an oracle exists whenever you are working under additional conditions where the existence of such a "special case" oracle makes sense to you, even though you can't show its existence in the general case. This outlook generalizes to all non-constructive existence statements (and disjunctive statements, as appropriate). It's emphatically not the same as constructive existence, but it can nonetheless be useful.
That argument ought to convince you that there's a mere "possible world" where that bank account turns out to exist. Sometimes we are implicitly interested in these special-cased "possible worlds", even though they'll involve conditions that we aren't quite sure about. Non-constructive existence is nothing more than a handy way of talking about such things, compared to the constructively correct "it's not the case that the existence of X is always falsified".
It would be weird for a constructivist to be interested in a possible world that they don't believe exists.
Theoretically possible? Sure. But the kinds of questions that lead you there are generally in opposition to the kinds of principles that lead someone to prefer constructivism.
>Today, mathematics is regarded as an abstract science.
Pure mathematics is regarded as an abstract science, which it is by definition. Arnol'd argued vehemently and much more convincingly for the viewpoint that all mathematics is (and must be) linked to the natural sciences.
>On forums such as Stack Exchange, trained mathematicians may sneer at newcomers who ask for intuitive explanations of mathematical constructs.
Mathematicians use intuition routinely at all levels of investigation. This is captured for example by Tao's famous stages of rigour (https://terrytao.wordpress.com/career-advice/theres-more-to-...). Mathematicians require that their intuition is useful for mathematics: if intuition disagrees with rigour, the intuition must be discarded or modified so that it becomes a sharper, more useful razor. If intuition leads one to believe and pursue false mathematical statements, then it isn't (mathematical) intuition after all. Most beginners in mathematics do not have the knowledge to discern the difference (because mathematics is very subtle) and many experts lack the patience required to help navigate beginners through building (and appreciating the importance of) that intuition.
The next paragraph, about how mathematics was closely coupled to reality for most of history and only recently, with our understanding of infinite sets, became too abstract, is not at all an accurate account of the history of mathematics. Euclid's Elements is 2300 years old and is presented in a completely abstract way.
The mainstream view in mathematics is that infinite sets, especially ones as pedestrian as the naturals or the reals, are not particularly weird after all. Once one develops the aforementioned mathematical intuition (that is, once one discards the naive, human-centric notion that our intuition about finite things should be the "correct" lens through which to understand infinite things, and instead allows our rigorous understanding of infinite sets to inform our intuition for what to expect) the confusion fades away like a mirage. That process occurs for all abstract parts of mathematics as one comes to appreciate them (except, possibly, for things like spectral sequences).
> Euclid's Elements is 2300 years old and is presented in a completely abstract way.
depends on what you mean by completely abstract. Euclid relies in a logically essential way on the diagrams. Even the first theorem doesn't follow from the postulates as explicitly stated, but relies on the diagram for us to conclude that two circles sharing a radius intersect.
> Pure mathematics is regarded as an abstract science, which it is by definition.
I'd argue that, by definition, mathematics is not, and cannot be, a science. Mathematics deals with provable truths; science cannot prove truth and must deal with falsifiability instead.
You could turn the argument around and say that math must be a science because it builds on falsifiable hypotheses and makes testable predictions.
In the end arguing about whether mathematics is a science or not makes no more sense than bickering about tomatoes being fruit; it can be answered both yes and no using reasonable definitions.
> In the end arguing about whether mathematics is a science or not makes no more sense than bickering about tomatoes being fruit
That's the thing, though — It does make sense, and it's an important distinction. There is a reason why "mathematical certainty" is an idiom — we collectively understand that maths is in the business of irrefutable truths. I find that a large part of science skepticism comes from the fundamental misunderstanding that science is, like maths, in the business of irrefutable truths, when it is actually in the business of temporarily holding things as true until they're proven false. Because of this misunderstanding, skeptics assume that science being proven wrong is a deathblow to science itself instead of being an integral part of the process.
The practical experience of doing mathematics is actually quite close to a natural science, even if the subject is technically a "formal science" according to the conventional meanings of the terms.
Mathematicians actually do the same thing as scientists: hypothesis building by extensive investigation of examples. Looking for examples which catch the boundary of established knowledge and try to break existing assumptions, etc. The difference comes after that in the nature of the concluding argument. A scientist performs experiments to validate or refute the hypothesis, establishing scientific proof (a kind of conditional or statistical truth required only to hold up to certain conditions, those upon which the claim was tested). A mathematician finds and writes a proof or creates a counter example.
The failure of logical positivism and the rise of Popperian philosophy make it clear that we can't approach that final step in the natural sciences the way we do in maths, but the practical distinction between the subjects is not so clear.
This is all without mentioning the much tighter coupling between the two modes of investigation at the boundary between maths and science in subjects like theoretical physics. There the line blurs almost completely, and a major tool used by genuine physicists is literally pursuing mathematical consistency in their theories. This has been used to tremendous success (GR, Yang-Mills, the weak force) and with some difficulties (string theory).
————
Einstein understood all this:
> If, then, it is true that the axiomatic basis of theoretical physics cannot be extracted from experience but must be freely invented, can we ever hope to find the right way? Nay, more, has this right way any existence outside our illusions? Can we hope to be guided safely by experience at all when there exist theories (such as classical mechanics) which to a large extent do justice to experience, without getting to the root of the matter? I answer without hesitation that there is, in my opinion, a right way, and that we are capable of finding it. Our experience hitherto justifies us in believing that nature is the realisation of the simplest conceivable mathematical ideas. I am convinced that we can discover by means of purely mathematical constructions the concepts and the laws connecting them with each other, which furnish the key to the understanding of natural phenomena. Experience may suggest the appropriate mathematical concepts, but they most certainly cannot be deduced from it. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed. - Albert Einstein
An alternative to abstraction is to use iconic forms and boundary math (containerization and void-based reasoning). See Laws of Form and William Bricken's books recently. Using a unary operator instead of binary (Boolean) does indeed seem simpler, in keeping with Nature. Introduction: https://www.frontiersin.org/journals/psychology/articles/10....
Mathematical "truth" all depends on what axioms you start with. So, in a sense, it doesn't prove "truth" either - just systemic consistency[1] given those starting axioms. Science at least grapples with observable phenomena in the universe.
Mathematical proofs are checked by noisy finite computational machines (humans). Even computer proofs' inputs-outputs are interpreted by humans. Your uncertainty in a theorem is lower bounded by the inherent error rate of human brains.
I agree we can't be absolutely certain of anything (maybe none of our memories are real and we just popped into existence etc.)
But we can be more sure of the deductive validity of a proof than we can be of any of the claims you make in these sentences, so I don't think they can serve to establish any doubt. If we're wrong about deductive logic, then we can only be more wrong about any empirical claims, which rely on deductive logic plus empirical observations
This may be, but not, I think, in a way that is particularly worth modeling?
When we try to model something probabilistically, it is usually not a great idea to model the probability that we made an error in our probability calculations as part of our calculations of the probability.
Ultimately, we must act. It does no good to suppose that “perhaps all of our beliefs are incoherent and we are utterly incapable of reason”.
Plenty of mathematical proofs have been proven true with 100% certainty. Complicated proofs that involve a lot of steps and checking can have errors. They can also be proven true if exhaustively checked.
You're saying maybe people have mistakenly accepted incorrect proofs now and again, so some theorems that people think are proven are unproven. I agree that this seems very likely.
In practice when proofs of research mathematics are checked, they go out to like 4 grad students. This isn't a very glamorous job for those grad students. If they agree then it's considered correct...
But note this is just the bleeding edge stuff. The basic stuff is checked and reproven by every math undergrad that learns math. Literally millions of people have checked all the proofs. As long as something is taught in university somewhere, all the people who are learning it (well, all the ones who do it well) are proving / checking the theory.
Anyway, when the scientific community accepts a bad proof what effectively happens is that we've just added an extra axiom.
Like when you deliberately add new axioms, there are 3 cases
- Axiom is redundant: it can be proven from the other axioms. (this is ... relatively fine? we tricked ourselves into believing something that is true is true, the reason is just bad.)
This can get discovered when people try to adapt the bad proof to prove other things and fail.
Also people find and publish "more interesting", "different" proofs for old theorems all the time. Now you have redundancy.
- Axiom contradicts other axioms: We can now prove p and not p.
I wonder if this has ever happened? I.e. people proving contradictions, leading them to discover that a generally accepted theorem's proof is incorrect. It must have happened a few times in history, no?
o/c maybe the reason this hasn't happened is that the whole logical foundation of mathematics is new, dating back to the hilbert program (1920s).
There are well-known instances of "proofs" being overturned before that, but they're not strictly proofs in the Hilbert-program sense, just arguments. (Of course they contain most of the work and ideas that would go into a correct proof, and if you understand them you can do a modern proof.)
Cauchy's proof that, if a sequence of continuous functions converges [pointwise] to a function, the limit function is also continuous (Cauchy's argument only holds for uniform convergence, not pointwise convergence, but people didn't really know the difference at the time; see the standard counterexample after this list)
- Axiom is independent of other axioms: You can't prove or disprove the theorem.
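(The standard counterexample for the Cauchy item above, for anyone curious: on [0,1] take f_n(x) = x^n. Each f_n is continuous, and f_n converges pointwise to the function that is 0 for x < 1 and 1 at x = 1, which is discontinuous. The convergence just isn't uniform, which is exactly the gap in Cauchy's argument.)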
English doesn't have an "I'm just hypothesizing all of this" voice; if it did, this post should be in it. I didn't do enough research to answer your question. Some of the above may be wrong, e.g. the part about the 4 grad students.
One should probably look for historical examples.
Mathematics is a science of formal systems. Proofs are its experiments, axioms its assumptions. Both math and science test consistency—one internally, the other against nature. Different methods, same spirit of systematic inquiry.
It's not an empirical science, but it is a science, where "science" means any systematic body of knowledge of an aspect of a thing and its causes under a certain method. (In that sense, most of what are considered scientific fields are families of sciences.) Mathematics is what you'd call a formal science with formal structure and quantity as its object of study and deductive inference and analysis as its primary methods (the cause of greatest interest is the formal cause).
A proof is just an argument that something is true. Ideally, you've made an extremely strong argument, but it's still a human making a claim something is true. Plenty of published proofs have been shown to be false.
Math is scientific in the sense that you've proposed a hypothesis, and others can test it.
The difference is that in mathematics you only have to check the argument. In the empirical sciences you have to both check the argument and also test the conclusion against observations
Empirical science uses both deductive logic to make predictions, and observations to check those predictions. I'm not saying that's all it involves. Not sure which part of that you disagree with
And a lot of what goes on in foundations of mathematics could be described as "testing the axioms", i.e. identifying which theorems require which axioms, what are the consequences of removing, adding, or modifying axioms, etc.
The difference is that mathematical arguments can be shown to be provably true when exhaustively checked (which is straightforward with simpler proofs). That's something you don't get with the empirical sciences.
Also, the empirical part means natural phenomena need to be involved. Math can be purely abstract.
You're making a strong argument if you believe you checked every possibility, but it's still just an argument.
If you want to escape human fallibility, I'm afraid you're going to need divine intervention. Works checked as carefully as possible still seem to frequently feature corrections.
Somewhat tangential to the discussion: I have once read that Richard Feynman was opposed to the idea (originally due to Karl Popper) that falsifiability is central to physics, but I haven't read any explanation.
I'm not sure if it deals only with provable truths? It even deals with the concept of unprovability itself, if the incompleteness theorem is considered part of mathematics
Yes, but Godel proved the incompleteness theorem, by ingeniously finding ways to prove things about unprovability.
The incompleteness theorem doesn't say that there are statements which are unprovable in any absolute sense. What it says is that given a formal system, there will always be statements which that particular formal system can't prove. But in fact as part of the proof, Godel proves this statement, just not by deriving it in the formal system in question (obviously, since that's what he's proving is impossible).
The way this is done is by using a "metalanguage" to talk about the formal theory in question. In this case it's a kind of ambient set theory. Of course, the proof also implies that if this ambient metalanguage is formalized then there will be sentences which it can't prove either, but these in general will be different sentences for each formalized theory.
Science involves both deductive and inductive reasoning. I would in turn argue that mathematics is a science that focuses heavily (but not entirely) on deductive reasoning.
He probably means science in a wider sense as opposed to the anglo-american narrower sense where science is just physics, chemistry, biology and similar topics.
Who cares? That's just semantics. If we define science as the systematic search for truths, then mathematics and logic are the paradigmatic sciences. If we define it as only the empirical search for truth, then perhaps that excludes mathematics, but it's an entirely uninteresting point, since it says nothing.
Not only is intuition important (or the entire point: anyone with some basic training, or even a computer, can follow rules to do formal symbol manipulation; it's the intuition for what symbol manipulation to do, and when, that's interesting), but it is literally discussed in a helpful, nonjudgmental way on Math Stack Exchange, for example.
Other great sources for quick intuition checks are Wikipedia and now LLMs, but mainly through putting in the work to discover the nuances that exist or learning related topics to develop that wider context for yourself.
> The next paragraph, about how mathematics was closely coupled to reality for most of history and only recently, with our understanding of infinite sets, became too abstract, is not at all an accurate account of the history of mathematics. Euclid's Elements is 2300 years old and is presented in a completely abstract way.
I may be off-base as an outsider to mathematics, but Euclid’s Elements, per my understanding, is very much grounded in the physical reality of the shapes and relationships he describes, if you were to physically construct them.
Quite the opposite: Plato, several hundred years before Euclid, was already talking about geometry as abstract, and indeed about the world of ideas and mathematics as being _more real_ than the physical world, and Euclid is very much in that tradition.
I am going to quote from the _very beginning_ of the elements:
Definition 1.
A point is that which has no part.
Definition 2.
A line is breadthless length.
Both of these two definitions are impossible to construct physically right off the bat.
All of the physically realized constructions of shapes were considered to basically be shadows of an idealized form of them.
Another point to keep in mind is that a lot of mathematics that's not considered abstract _now_ was definitely considered "hopelessly" abstract at the time of its conception.
The complex number system started being explored by the Greeks long before any notion of the value of complex spaces existed, or could be mapped to something in reality.
I don't think we can say the Greeks were exploring complex numbers. There's something about Diophantus finding a way to combine two right-angled triangles to produce a third triangle whose hypotenuse is the product of the hypotenuses of the first two triangles. He finds an identity that's equivalent to complex multiplication, but this is because complex multiplication has a straightforward geometric interpretation in the plane that corresponds to this way of combining triangles.
There's a nice (brief) discussion in section 20.2 of Stillwell's Mathematics and its History
Plato was only about a generation before Euclid. Their lives might have even overlapped, or nearly so: Plato died in 347BC and Euclid's dates aren't known but the Elements is generally dated ~300BC
The only things that are weird in math are things that would not be expected after understanding the definitions. A lot of the early hurdles in mathematics are just learning and gaining comfort with the fact that the object under scrutiny is nothing more than what it's defined to be.
How has mathematics gotten so abstract? My understanding was that mathematics was abstract from the very beginning. Sure, you can say that two cows plus two more cows makes four cows, but that already is an abstraction - someone who has no knowledge of math might object that one cow is rarely exactly the same as another cow, so just assigning the value "1" to any cow you see is an oversimplification. Of course, simple examples such as this can be translated into intuitive concepts more easily, but they are still abstract.
It is abstract in the strict sense, of course. Every science is, as "abstract" simply means "not concrete". All reasoning is by definition abstract, in the sense that it involves concepts, and concepts are by definition abstract.
Numbers, for example, are abstract in the sense that you cannot find concrete numbers walking around or falling off trees or whatever. They're quantities abstracted from concrete particulars.
What the author is concerned with is how mathematics became so abstract.
You have abstractions that bear no apparent relation to concrete reality, at least not according to any direct correspondence. You have degrees of abstraction that generalize various fields of mathematics in ways that are increasingly far removed from concrete reality.
Mathematics arose from ancient humans' need to count and measure. Even the invention/discovery of calculus was in service to physics. It has probably only been 300 years or so since mathematics has been symbolic; before that it was more geometric and more attached to the physical world.
Leibniz (late 1600s) helped to popularize negative numbers. At the time most mathematicians thought they were "absurd" and "fictitious".
Almost from the first time people started writing about mathematics, they were writing about it in an abstract way. The Egyptians and the Babylonians kept things relatively concrete and mostly stuck to word problems (although lists of pythagorean triples is evidence for very early "number theory"), but Greece, China and India were all working in abstractions relatively early.
Symbolic here refers to doing math with placeholders, be they letters or something else. The ancient world had notations for recording numbers, but much less so for doing math with them, say like long division.
> My understanding was that mathematics was abstract from the very beginning.
It wasn't; but that's a common misunderstanding from hundreds of centuries of common practice.
So, how has maths gotten so abstract? Easy, it has been taken over by abstraction astronauts(1), who have existed throughout all eras (and not just in software engineering).
Mathematics was created by unofficial engineers as a way to better accomplish useful activities (guessing the best time of year to start migrating, and later harvesting; counting what portion of harvest should be collected to fill the granaries for the whole winter; building temples for the Pharaoh that wouldn't collapse...)
But then, it was adopted by thinkers who enjoyed the activity for itself and started exploring it for sheer joy; math stopped representing "something that needed doing in an efficient way", and was considered "something to think about to the last consequences".
Then it was merged into philosophy, with considerations about perfect regular solids, or things like the (misunderstood) metaphor of shadows in Plato's cave (which people interpreted as being about duality of the essences, when it was merely an allegory on clarity of thinking and explanation). Going from an intuitive physical reality such as natural numbers ("we have two cows", or "two fingers") to the current understanding of numbers as an abstract entity ("the universe has the essence of number 'two' floating beyond the orbit of Uranus"(2)) was a consequence of that historical process, when layers upon layers of abstraction took thinkers further and further away from the practical origins of math.
> That is, numbers were specifically used to abstract over how other things behave using simple and strict rules. No?
Agree that math is built on language. But math is not any specific set of abstractions; time and again mathematicians have found out that if you change the definitions and axioms, you arrive at a quite different set of abstractions (different numbers, geometries, infinite sets...). Does it mean that the previous math ceases to exist when you find a contradiction in it? No, it's just that you start talking about new objects, because you have gained new knowledge.
The math is not in the specific objects you find, it's in the process of finding them. Rationalism consists of thinking one step at a time, with rigor. Math is the language by which you explain rational thought in a very precise, unambiguous way. You can express many different thoughts, even inconsistent ones, with the same precise language of mathematics.
Agreed that we grew math to be that way. But there is an easy-to-trace history in the names of the numbers: reals, rationals, imaginaries, etc. They were largely named for how the language relates them to physical things.
Proposed rule: People writing about the history of mathematics should learn something about the history of mathematics.
Mathematicians didn't just randomly decide to go to abstraction and the foundations of mathematics. They were forced there by a series of crises where the mathematics that they knew fell apart. For example Joseph Fourier came up with a way to add up a bunch of well-behaved functions - sin and cos - and came up with something that wasn't considered a function - a square wave.
The focus on abstraction and axiomatization came after decades of trying to repair mathematics over and over again. Trying to retell the story in terms of the resulting mathematical flow of the ideas completely mangles the actual flow of events.
I have to disagree with this. Modern (pure) mathematics is abstract and very often completely detached from practical applications because of culture and artistic inspiration. There is no "objectivity" driving modern pure mathematics. It exists mostly because people like thinking about it. Any connection to the real world is often a coincidence or someone outside the field noticing that something (really just a tiny-tiny amount) in pure maths could be useful.
> forced there by a series of crises where the mathematics that they knew fell apart
This can be said to be true of those working in foundations, but the vast majority of mathematicians are completely uninterested in that! In fact, most mathematicians today probably can't cite you the set-theoretic (or any other foundation) axioms that they use every day, if you ask them point-blank.
I think the title is a little tongue in cheek. The rest of the blog post develops the foundations of arithmetic in a clear, well-grounded manner. This is probably a really good introduction for someone about to take a Foundations course. I say this having just read Potter's "Set Theory and Its Philosophy", which covers the same material (and a lot more, obviously) in some 300 pages.
Another good introduction is Frederic Schuller's YouTube lectures, though already there you can start to see the over abstraction.
My mental representation of this phenomenon is like inverted Russian dolls: you start by learning the inner layers, the basics, and as you mature, you work your way into more abstractions, more unified theories, more structures, adding layers as you learn more and more. This adds difficulty, but the extreme refinement is also very beautiful. When studying mathematics I like to think of all these steps, all the people, and the centuries of trial and error and refinement it took to arrive where we are now.
The French Bourbaki school certainly had a large influence on increasing abstraction in math, with their rallying cry "Down With Triangles". The more fundamental reason is that generalizing a problem works; it distills the essence and allows machinery from other branches of math to help solve it.
"A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies."
This article explores a particular kind of abstractness in mathematics, especially the construction of numbers and the cardinalities of infinite sets. It is all very interesting indeed.
However, the kind of abstractness I most enjoy in mathematics is found in algebraic structures such as groups and rings, or even simpler structures like magmas and monoids. These structures avoid relying on specific types of numbers or elements, and instead focus on the relationships and operations themselves. For me, this reveals an even deeper beauty, i.e., different domains of mathematics, or even problems in computer science, can be unified under the same algebraic framework.
Consider, for example, the fact that the set of real numbers forms a vector space over the set of rationals. Can it get more abstract than that? We know such a vector space must have a basis, but what would that basis even look like? The existence of such a basis (a Hamel basis) is guaranteed by the axiom of choice, yet it defies explicit description. That, to me, is the most intriguing kind of abstractness!
Despite being so abstract, the same algebraic structures find concrete applications in computing, for example, in the form of coding theory. Concepts such as polynomial rings and cosets of subspaces over finite fields play an important role in error-correcting codes, without which modern data transmission and storage would not exist in their current form.
When I was learning me a Haskell I had a great time when I realised that as long as my type was a monoid I could freely chain the operations together purely because of associativity
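A minimal sketch of that, assuming GHC and the standard Data.Monoid module (the names here are mine, not from any particular tutorial):

    import Data.Monoid (Sum(..))

    -- mconcat works for any Monoid; associativity (plus the identity element)
    -- is exactly what lets us chain the operations without caring about grouping.
    total :: Sum Int
    total = mconcat [Sum 1, Sum 2, Sum 3]   -- Sum {getSum = 6}

    joined :: String
    joined = mconcat ["ab", "cd", "ef"]     -- "abcdef"; lists form a monoid under (++)

    main :: IO ()
    main = print (getSum total, joined)

The same mconcat call works whether the "addition" is numeric, string concatenation, or anything else with an associative operation and a unit.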
The definition of bijection is much more interesting than comparing cardinals. There are many everyday use cases where (structure-preserving) bijections make it clear that two a priori different objects can be treated similarly.
More generally, mathematics is experimental not just in the sense that it can be used to make physical predictions, but also (probably more importantly) in that definitions are "experiments" whose outcome is judged by their usefulness.
Sure, even 500 years ago negative numbers were "absurd" in Western mathematics, and even in Eastern mathematics, where they were used, they were thought of more as credits and debts than as abstract numbers.
Discussions of this sort can easily get chaotic, because people tend to conflate intuitiveness and concreteness. Sometimes the whole point of abstraction is to make a concept clearer and more intuitive. The distinction between polynomial function and polynomial is an example.
My hypothesis for this is the disconnect between mathematics and fields like physics and theoretical computer science.
We likely need new mathematics for making progress in physics or, say, for a better understanding of the P-vs-NP kind of problems, but very few high caliber mathematicians are motivated to do this.
Which makes sense, as it's way easier and more prestigious to define and solve your own abstract problems, publish one paper per grad student per year and coast through research life.
Can one do QFT in ultrafinitistic foundations? My guess is no.
Also, I don’t think ZF sans the axiom of infinity works as an ultrafinitistic theory? It still has every natural number, just not the set of all of them.
I found it a bit ironic that the author introduced C code there as an aid, but didn't incorporate it into their argument. As I see it, code is exactly the bridge between abstract math and the empirical world - the process of writing code to implement your mathematical structure and then seeing if it gives you the output you expect (or better yet, with Lean, if it proves your proposition) essentially makes math a natural science again.
No, the correctness of your implementation is a mathematical statement about a computation running in a particular computational environment, and can be reasoned about from first principles without ever invoking a computer. Whether your computation gives reasonable outputs on certain inputs says nothing (in general) about the original mathematics.
While mathematics "can" be reasoned about from first principles, the history of math is chock-full of examples of professional mathematicians convinced by unsound and wrong arguments. I prefer the clarity of performing math experiments and validating proofs on a computer.
Yes, but a C or Python program that "implements" a proof and which you test by running it on a few inputs is very different from a program in an interactive theorem prover like Rocq or Lean. In the latter, validity is essentially decided by type-checking, not execution.
It is not a matter of what you think; it is a logical fact, part of the definition if you will.
What you call concrete were the origins of math as we know it. Geometry, astronomy, metaphysics, etc. all had in common the fundamental abstract thing that we call math today.
Saying "math got abstract" is like saying "a tree got wooden". Because when it was a seed, it wasn't yet a tree in the full sense.
Given the collective time put into it, the easier stuff was already solved thousands of years ago, and people are not really left with anything trivial to work on. Hence the focus on more and more abstract things, as those are the only places left to do something novel.
Two interesting cases: convex analysis and linear algebra are both relatively easy, concrete areas of mathematics. Also beautiful and unbelievably useful. Yet they didn't develop until the 19th century and didn't mature until the 20th.
Infinity is a convenience that pays off in terseness. There's constructive mathematics, but it's wordy and has lots of cases. You can escape undecidability if you give up infinity. Most mathematicians consider that a bad trade.
None of that was even the abstract stuff. It is all models of sizes, order, and inclusion (integers, cardinals, ordinals, sets). Not the nastier abstractions of partial orders, associativity, composition and so on (lattices, categories, ...).
We used Peano arithmetic when doing C++ template metaprogramming anytime a for loop from 0..n was needed. It was fun and games as long as you didn't make a mistake because the compiler errors would be gnarly. The Haskell people still do stuff like this, and I wouldn't be surprised if someone were doing it in Scala's type system as well.
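For anyone curious, the Haskell flavour of the same trick looks roughly like this (a sketch using DataKinds and closed type families, not a translation of any particular C++ code):

    {-# LANGUAGE DataKinds, KindSignatures, TypeFamilies #-}

    -- Peano naturals; with DataKinds they also exist at the type level.
    data Nat = Z | S Nat

    -- Type-level addition by structural recursion. The compiler "runs the loop"
    -- by reducing the type family, much like C++ template instantiation does.
    type family Add (n :: Nat) (m :: Nat) :: Nat where
      Add 'Z     m = m
      Add ('S n) m = 'S (Add n m)

    -- e.g. Add ('S 'Z) ('S 'Z) reduces to 'S ('S 'Z) at compile time.

    -- The same recursion at the value level:
    add :: Nat -> Nat -> Nat
    add Z     m = m
    add (S n) m = S (add n m)

And yes, the error messages when you get it wrong are about as gnarly as the C++ ones.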
Also, the PLT people are using lattices and categories to formalize their work.
"Indeed, persistently trying to relate the foundations of math to reality has become the calling card of online cranks." <-- Hm??? I'm getting self-conscious. Details?
>Next, consider the time needed for Achilles to reach the yellow dot; once again, by the time he gets there, the turtle will have moved forward a tiny bit. This process can be continued indefinitely; the gap keeps getting smaller but never goes to zero, so we must conclude that Achilles can’t possibly win the race.
Am I daft? Eventually (very soon) Achilles would overtake the turtle's position regardless of how far it moved... Am I missing something?
You're not; the argument is a famous error known as Zeno's paradox. It's only an apparent paradox, and indeed it's been disproven by observing that things do in fact move.
I like the humourous way of putting it, but of course Zeno and his contemporaries knew that things moved - that's exactly why this seemed to be a paradox. Seemingly secure reasoning results in a conclusion that's obviously false.
To resolve the paradox, you have to show what's wrong with the reasoning, not just observe the obviously false conclusion.
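The usual way to do that is to note that the infinitely many stages take a finite total time. With Achilles running at speed v_A, the turtle at v_T < v_A, and a head start of d, the stage durations form a geometric series:

    t_k = \frac{d}{v_A}\left(\frac{v_T}{v_A}\right)^{k-1},
    \qquad
    \sum_{k=1}^{\infty} t_k = \frac{d}{v_A} \cdot \frac{1}{1 - v_T/v_A} = \frac{d}{v_A - v_T} < \infty

Zeno's hidden assumption is that infinitely many stages must take infinitely long; the series shows they take only d/(v_A - v_T), after which Achilles is ahead.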
Wow, this is some serious overcomplication. How can anyone mix Philosophy and Mathematics? They are not even in the same ballpark.. Even with infinity. It's just something that can't be understood in the mind, IMHO.
One could also say the opposite. It's not abstract at all, just a set of rules and their implications. Plausibly the least abstract thing there is.
On the other hand, two cookies plus three cookies, what even is a cookie? What if they're different sizes? Do sandwich cookies count as one or two? If you cut one in half, do you count it as two cookies now? All very abstract. Just give me some concrete definitions and rules and I'll give you a concrete answer.
I used to be a physicist and I love math for the toolbox it provides (mostly Analysis). It allows me to solve a physical model and make predictions.
When I was studying, I always got top marks in Analysis.
Then came Algebra, Topology and similar nightmares. Oh crap, that was difficult. Not really because of the complexity, but rather because of abstraction, an abstraction I could not take to physics (I was not a very good physicist either). This is the moment I realized that I will never be "good in maths" and that it will remain a toolbox to me.
Fast forward 30 years, my son has differentials in high school (France, math was one of his "majors").
He comes to me to ask what the fuck it is (we have an unhealthy fascination for maths in France, and teach it the same way as in 1950). It is only when we went from physical models to differentials that it became clear. We did again the trip Newton did - physics rocks :)
I feel like a great deal more credit should be given to Cauchy and his school, but I understand the tale is long enough.
The Peano axioms are pretty nifty though. To get a better appreciation of the difficulty of formally constructing the integers as we know them, I recommend trying the Natural Number Game in Lean, found here: https://adam.math.hhu.de/
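To give a taste of what the game has you build, the core definitions look roughly like this (Lean 4 syntax; the game's own names and lemmas differ a bit):

    inductive MyNat where
      | zero : MyNat
      | succ : MyNat → MyNat

    -- Addition by recursion on the second argument, straight from the Peano-style definition.
    def add : MyNat → MyNat → MyNat
      | n, MyNat.zero   => n
      | n, MyNat.succ m => MyNat.succ (add n m)

Everything else, from commutativity of addition up to ordering and induction, has to be proven from definitions like these, which is exactly where the appreciation for the difficulty comes from.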
I believe that abstraction is recursive in nature which creates multiple layers of abstract ideas leading to new areas or insights. For instance our understanding of continuity and limit led to calculus, which when tied to the (abstract) idea of linearity led to the idea of linear operator which explains various phenomena in the real world surprisingly well.
You could say that abstraction is a step or a ladder: by climbing on an abstraction you can see new goals and opportunities, possibly out of reach until you build yet new steps.
Not sure why oneness is privileged as what they have in common, and their oneness is meaningless by itself. Oneness is a property that is only meaningful in relation to other concepts of objects.
A rock is not physically a material object, it is a region of space where the electrons, protons and neutrons are differently arranged, and that region is fuzzy, difficult to determine; but as physical beings, as monkeys, we recognise its oneness; that's necessary for our survival in this physical world. We see this blurred outline of a rock, we feel its weight in our hand, we observe its practical difference from two rocks. Just as we recognise twoness in a pair of rocks, fish, apples, and threeness in a triple of parrots, of carrots, we abstract those out into 1, 2, 3, ...
I think this is a really good question, and the answer might be that ideally you move up and down the ladder of abstraction, learning from concrete examples in some domains, then abstracting across them, then learning from applying the abstractions, then abstracting across abstractions, then cycling through the process.
The tendency towards excessive abstraction is the same as the use of jargon in other fields: it just serves to gatekeep everything. The history of mathematics (and science) is actually full of amateurs, priests and bored aristocrats that happened to help make progress, often in their spare time.
Complaining about jargon is lazy. Most communications about complicated things are not aimed at the layman, because to do anything useful with the complicated things, you tend to have to understand a fair amount of the context of the field. Once you're committed to actually learning about the field, the jargon is the easiest part: they're just words or phrases that mean something very specific.
To put it another way: Jargon is the source code of the sciences. To an outsider, looking in on software development, they see the somewhat impenetrable wall of parentheses and semicolons and go "Ah, that's why programming is hard: you have to understand code". And I hope everyone here can understand that that's an uninformed thing to say. Syntax is the easy part of programming, it was made specifically to make expressing the rigorous problem solving easier. Jargon is the same way: it exists to make expressing very specific things that only people in this subfield actually think about easier, instead of having to vaguely gesture at the concept, or completely redefine it every time anybody wants to communicate within the field.
Abstraction isn't to gatekeep; it's to increase the utility. It's the same as "dependency inversion" in programming: do your logic in terms of interfaces/properties, not in terms of a particular instance. This makes reasoning reusable. It also often makes things clearer by cutting out distracting details that aren't related to the core idea.
People are aware that you need context to motivate abstractions. That's why we start with numbers and fractions and not ideals and localizations.
Jargon in any field is to communicate quickly with precision. Again the point is not to gatekeep. It's that e.g. doctors spend a lot of time talking to other doctors about complex medical topics, and need a high bandwidth way to discuss things that may require a lot of nuance. The gatekeeping is not about knowing the words; it's knowing all of the information that the words are condensing.
Formal reasoning is the point, which is not by itself abstraction.
Someone else in this discussion is saying Euclid's Elements is abstract, which is near complete nonsense. If that is abstract our perception of everything except for the fundamental [whatever] we are formed of is an abstraction.
I love how you lot just redefine words to suit your purpose:
https://www.etymonline.com/word/formal
"late 14c., "pertaining to form or arrangement;" also, in philosophy and theology, "pertaining to the form or essence of a thing," from Old French formal, formel "formal, constituent" (13c.) and directly from Latin formalis, from forma "a form, figure, shape" (see form (n.)). From early 15c. as "in due or proper form, according to recognized form," As a noun, c. 1600 (plural) "things that are formal;" as a short way to say formal dance, recorded by 1906 among U.S. college students."
There's not a much better description of what Euclid was doing.
What you mean is someone has redefined the word to suit their purpose, which is precisely what I pointed out at the top.
Edit to add: this comment had a sibling, that was suggesting that given a specific proof assistant requires all input to be formal logic perhaps the word formal could be redefined to mean that which is accepted by the proof assistant. Sadly this fine example of my point has been deleted.
Every mathematician understands what a formal proof is. Ditto a formal statement of a mathematical or logical proposition. The mathematicians of 100 years ago also all understood, and the meaning hasn't changed over the 100 years.
> The mathematicians of 100 years ago also all understood, and the meaning hasn't changed over the 100 years.
Isn't that the subject of the whole argument? That mathematicians have taken the road off in a very specific direction, and everyone disagreeing is ejected from the field, rather like occurred more recently in theoretical physics with string theory.
Prior to that time quite clearly you had formal proofs which do not meet the symbolic abstraction requirements that pure mathematicians apparently believe are axiomatic to their field today, even if they attempt to pretend otherwise, as argued over the case of Euclid elsewhere. If the Pythagoreans were reincarnated, as they probably expected, they would no doubt be dismissed as crackpots by these same people.
Not all proofs are formal, and most published papers are not formal in the strictest sense. That is why they talk about "formalizing" a proof if there is some question about it. It is that formalization process which often finds flaws.
No, abstraction is the point and formal reasoning is a tool. And yes, what Euclid did is obviously abstraction; I don't know why you consider this stance nonsense.
Can you say how mathematics is inherently abstract in a way consistent with your day-to-day life as a concrete person? Or is your personhood also an abstraction?
I could construct a formal reasoning scheme involving rules and jugs on my table, where we can pour liquids from one to another. It would be in no way symbolic, since it could use the liquids directly to simply be what they are. Is constructing and studying such a mechanism not mathematics? Similarly with something like musical intervals.
Of course I can. I frequently use numbers, which are a great abstraction. I can use the same number five to describe apples, bananas and everything countable.
> to describe apples, bananas and everything countable
An apple is an abstraction over the particles/waves that comprise it, as is a banana.
Euclid is no more abstract than the day to day existence of a normal person, hence to claim that it is unusually abstract is to ignore, as you did, the abstraction inherent in day to day life.
As I pointed out it's very possible to create formal reasoning systems which are not symbolic or abstract, but due to that are we to assume constructing or studying them would not be a mathematical exercise? In fact the Pythagoreans did all sorts of stuff like that.
> An apple is an abstraction over the particles/waves that comprise it, as is a banana.
No, you don't understand what abstraction is. An apple is exactly an arrangement of particles; it's not an abstraction over them.
> hence to claim that it is unusually abstract
Who talks about him being unusually abstract (and not just abstract)?
> is to ignore, as you did, the abstraction inherent in day to day life.
How am I ignoring this abstraction when I've provided you exactly that (numbers are an abstraction inherent in day-to-day life)?
I’m sorry but you seem to be discussing in bad faith.
> Apple is exactly arrangement of particles, it’s not abstraction over them.
No. You can do things to that apple, such as bite it, and it is still an apple, despite it now having a different set of particles. It is the abstract concept of appleness (which we define . . . somehow) applied to that arrangement of particles.
> I’m sorry but you seem to be discussing in bad faith.
I believe mathematics was much tamer before Georg Cantor's work. If I had to pick a specific point in history when maths got "so abstract", it would be the introduction of axiomatic set theory by Zermelo.
I personally cannot wrap my head around Cantor's infinitary ideas, but I'm sure it makes perfect sense to people with better mathematical intuition than me.
I'm curious how you managed to find nothing on lcamtuf. He's one of the most famous Polish hackers from the 90s, then one of the best security researchers Google had. Even if you live under a rock, the substack has an "about" section.
If it wasn't for Michał I'd probably be a farmer today.
Did you bother to google his handle? While I don't know his pure mathematics credentials, he's nerd-famous enough to not warrant an introduction. In fact, you not recognizing it says something about you.
To be fair, we are on Hacker News. I did once use one of his programs, American Fuzzy Lop (false advertising lawsuit incoming if it's not American). So he is not a nobody, apparently.
This reminds of of that one time when I was on a date with a girl from the history department who somehow bemusedly sat through my entire mini-lecture on comparing infinite sets. Twenty years and three kids later, she'll still occasionally look me straight in the eye and declare "my infinity is bigger than your infinity."
Wow, I did a very similar thing on the first date with my now wife. I explained the halting problem, and Godel's incompleteness theorems. We also talked about her (biomedical) research, so it wasn't a one sided conversation.
I think dominating on a first date is a risk (which I was mindful of) but just being yourself, and talking about something you're truly passionate about is the key.
This is the type of romcom I'd watch ;)
Once I taught the binomial coefficient formula to a girl after sex
"So you see if the chance of pregnancy is constant per..uh..encounter, and given that the condom just broke, we're on a spectrum from the chance of a second round roughly doubling the odds but the overall chance is still small, or it doesn't make much difference anyway. Either way, the numbers say we should go again."
that's not too abstract, I can see how this formula applies to sex
I tried using it for this: https://adventofcode.com/2023/day/12 but computer said no
The Fibonacci sequence might have been more appropriate.
Not if they were using contraception.
Fittingly this is roughly the same vintage as your relationship then: https://youtu.be/BipvGD-LCjU
I taught my wife the simplex algorithm for linear programming and she forgot all of it.
Turns out I'm neither good at maths nor at teaching.
This is the sweetest thing ever and I hope you feel those butterflies even now sharing this story.
Way back then, calculus was a culture war battleground. Bishop Berkeley famously argued the foundations of calculus weren't any better that those of theology. This sort of thing motivated much work into shoring them up, getting rid of infinitesimals and the like (or, later, making infinitesimals rigorous in nonstandard analysis).
https://en.wikipedia.org/wiki/The_Analyst
You would not. People love hearing about the things you care about as long as you can present them in interesting ways. Try it!
There’s something called “not giving a fuck” that works in those situations. The crux of it though is you need to “know thyself” or you’ll be forever your worst critic and enemy.
Also you're being open and readable to the other person. You're not being deceptive or putting on a show, which is usually what rustles people's jimmies.
Get this incel trolling out of here. No, all that would happen if your date weren’t interested in math/whatever you’re into is a polite message “I had a great time but I don’t think we have much in common” and leave it at that.
His oddly specific fear of being "plastered on a bunch of facebook new-york-dating-experience groups" sounds like women have had to warn each other about him before, and it probably wasn't about his interest in math, but something much worse.
I wouldn't go on Hinge if that's the default experience.
It is not, of course.
I'm curious. Did either of you ever notice the implicit philosophical assumptions that you have to make to come to the conclusion that one infinity can be larger than another?
Despite the fact that this was actively debated for decades, modern math courses seldom acknowledge the fact that they are making unprovable intellectual leaps along the way.
> Despite the fact that this was actively debated for decades, modern math courses seldom acknowledge the fact that they are making unprovable intellectual leaps along the way.
That’s not at all true at the level where you are dealing with different infinities, usually, which tends to come after the (usually, fairly early) part dealing with proofs and the fact that all mathematics is dealing with “unprovable intellectual leaps” which are encoded into axioms, and everything in math which is provable is only provable based on a particular chosen set of axioms.
It may be true that math beyond that basic level doesn’t make a point of going back and explicitly reviewing that point, but it is just kind of implicit in everything later.
I guarantee that a naive presentation doesn't actually include the axioms, and doesn't address the philosophical questions dividing formalism from constructivism.
Uncountable need not mean more. It can mean that there are things that you can't figure out whether to count, because they are undecidable.
If you're trying to find mistakes in the logic, does it not make sense to push it to its bounds? Look at the Banach-Tarski Paradox. Sure, normal people hear about it and go "oh wow, cool." But when it was presented in my math course it was used as a discussion of why we might want to question the Axiom of Choice, and of how removing it creates new concerns. Really the "paradox" was explored to push the bounds of the axiom of choice in the first place. They asked "can this axiom be abused?" And the answer is yes. Now the question is "does this matter, since infinity is non-physical? Or does it matter despite infinity being non-physical?"
You seem to think mathematicians, physicists, and scientists in general believe infinities are physical. As one of those people, I'm not sure why you think that. We don't. I mean math is a language. A language used because it is pedantic and precise. Much the same way we use programming languages. I'm not so sure why you're upset that people are trying to push the bounds of the language and find out what works and doesn't work. Or are you upset that non-professionals misunderstand the nuances of a field? Well... that's a whole other conversation, isn't it...
Your guesses at what I seem to think are completely off base and insulting.
When I say "modern math courses", I mean like the standard courses that most future mathematicians take on their way to various degrees. For all that we mumble ZFC, it is darned easy to get a PhD in mathematics without actually learning the axioms of ZFC. And without learning anything about the historical debates in the foundations of mathematics.
Honestly it's difficult to understand exactly what you're arguing. Because I understand laymen not understanding your argument about infinities not being real (even many HN users don't understand that code is math, but a CS degree doesn't take you far in math: some calc and maybe lin alg), but are we concerned about laymen? I too am frustrated by nonexperts having strong opinions and having difficulties updating them, but that's not a culture problem. We're on HN and we know the CS stereotypes, right?
If instead you're talking about experts then I learned about what you're talking about in my Linear 2 course in a physics undergrad and have seen the topic appear many times since even outside my own reading of set theory. The axiom of choice seems to have even entered more main stream nerd knowledge. It's very hard to learn why AoC is a problem without learning about how infinities can be abused. But honestly I don't know any person that's even an amateur mathematician that thinks infinities are physical
The fact that you think I'm talking about the axiom of choice, demonstrates that you didn't understand what I'm talking about. I would also be willing to bet a reasonable sum of money that this topic did not come up in your Linear 2 course in physics undergrad.
The arguments between the different schools of philosophy in math are something that most professional mathematicians are unaware of. Those who know about them, generally learned them while learning about either the history of math, or the philosophy of math. I personally only became aware of them while reading https://www.amazon.com/Mathematical-Experience-Phillip-J-Dav.... I didn't learn more about the topic until I was in grad school, and that was from personal conversations. It was never covered in any course that I took on, either in undergraduate or graduate schools.
Now I'm curious. Was there anything that I said that should have been said more clearly? Or was it hard to understand because you were trying to fit what I said into what you know about an entirely unrelated debate about the axiom of choice?
The reason I brought up AoC is because it is a common way to learn about the abuse of infinity and where axioms need be discussed. Both things you brought up. I think you are reading further into this than I intended.
Is this a joke? When someone says
That's your chance to explain. It is someone explicitly saying... I'm trying to understand but you are not communicating efficiently. This is even more frustrating as you keep pointing out that this is not common knowledge. So why are you also communicating like it is?! If it is something so few know about then be fucking clear. Don't make anyone guess. Don't link a book; use your own words, and link a book if you want to suggest further reading, but not as "this is the entire concept I'm talking about". Otherwise we just have to guess, and you getting pissed off that we guess wrong is downright your own fault.
So stop shooting yourself in the foot and blaming others. If people aren't understanding you, try assuming they can't read your mind and don't have the exact same knowledge you do. Talk about fundamental principles...
I've had a lot of chances to explain. I've posted a lot of explanations. For example see https://news.ycombinator.com/item?id=45435534 for an explanation that I posted 11 hours ago. See https://plato.stanford.edu/entries/mathematics-constructive/ for a link that I gave. See https://news.ycombinator.com/item?id=45434701 for someone with a different point of view, attempting to explain the same key point.
That point being that what we mean by "exists" is fundamentally a philosophical question. And our conclusions about what mathematical things exist will depend on how we answer that question. And very specifically, there are well-studied mathematical philosophies in which uncountable sets do not have larger cardinalities than countable ones.
If none of those explanations wind up being clear for you, then I'm going to need feedback from you to have a chance to explain this to you. Because you haven't told me enough for me to make any reasonable guess what the sticking point is between you and understanding. And without that, I have no chance of guessing what would clarify this for you.
The "philosophical questions" dividing formalism from constructivism are greatly overstated. The point of having those degrees of undecidability or uncountability is precisely to be able to say things like "even if you happen to be operating under strong additional assumptions that let you decide/count X, that still doesn't let you decide/count Y in general." That's what formalism is: a handy way of making statements about what you can't do constructively in the general case.
To be fair, constructivists tend to prefer talk about different "universes" as opposed to different "sizes" of sets, but that's all it is: little more than a mere difference in terminology! You can show equiconsistency statements across these different points of view.
Yes, you can show such equiconsistency statements. As Gödel proved, for any set of classical axioms, there is a corresponding set of intuitionistic axioms. And if the classical axioms are inconsistent, then so is the intuitionistic equivalent. (Given that intuitionistic reasoning is classically valid, an inconsistency in the intuitionistic axioms trivially gives you one in the classical axioms.)
So the care that intuitionists take does not lead to any improvement in consistency.
However the two approaches lead to very different notions of what it means for something to mathematically exist. Despite the formal correspondences, they lead to very different concepts of mathematics.
I'm firmly of the belief that constructivism leads to concepts of existence that better fit the lay public than formalism does.
Probably not. But this one time we had an argument and I made a statement along the lines of "I'm right, naturally." She went irrational. I lost the argument.
QED
LOL
If she laughs at that kind of thing, I can see why you married her.
You don't need an implicit philosophical assumption, you just need to define what an infinity is and the comparison method.
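For reference, the standard definitions are roughly:

    |A| \le |B| \iff \text{there is an injection } f : A \to B
    |A| =   |B| \iff \text{there is a bijection } f : A \to B
    |A| <   |B| \iff |A| \le |B| \text{ and } |A| \ne |B|

    \text{Cantor's theorem: } |A| < |\mathcal{P}(A)| \text{ for every set } A

Together with the diagonal argument, these definitions alone already give |N| < |R|.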
This looks like a philosophical stance in the philosophy of mathematics actually, and it's called formalism
Here's a hint. When someone makes a reference to something that was actively debated for decades, and you're not familiar with said debates, you should probably assume that you're missing some piece of relevant knowledge.
https://plato.stanford.edu/entries/mathematics-constructive/ is one place that you could start filling in that gap.
What leaps are "unprovable"? I'm curious, that doesn't sound right.
For sure there are valid arguments on whether or not to use certain axioms which allow or disallow some set theoretical constructions, but given ZFC, is there anything that follows that is unprovable?
When you say "given ZFC", you're assuming a lot. Including a notion of mathematical existence which bears little relation to any concept that most lay people have of what mathematical existence might mean.
In particular, you have made sufficient assumptions to prove that almost all real numbers that exist can never be specified in any possible finite description. In what sense do they exist? You also wind up with weirder things. Such as well-specified finite problems that provably have a polynomial time algorithm to solve...but for which it is impossible to find or verify that algorithm, or put an upper bound on the constants in the algorithm. In what sense does that algorithm exist, and is finite?
Does that sound impossible? An example of an open problem whose algorithm may have those characteristics is an algorithm to decide which graphs can be drawn on a torus without any self-crossings.
If our notion of "exists" is "constructable", all possible mathematical things can fit inside of a countable universe. No set can have more than that.
> When you say "given ZFC", you're assuming a lot.
Errr, I'm just assuming the axioms of ZFC. That's literally all I'm doing.
> In what sense do [numbers that can't be finitely specified] exist?
In the sense that we can describe rules that lead to them, and describe how to work with them.
I understand that you're trying to tie the notion of "existence" to constructability, and that's fine. That's one way to play the game. Another is to use ZFC and be fine with "weird, unintuitive to laypeople" outcomes. Both are interesting and valid things to do IMO. I'm just not sure why one is obviously "better" or "more real" or something. At the end, it's all just coming up with rules and figuring out what comes out of them.
My point is that going from a lay understanding of mathematics to "just accept ZFC" means jumping past a variety of debatable philosophical points, and accepting a standard collection of answers to them. Mathematicians gloss over that.
Yeah, I think that's fair.
On the other hand, I think it's really cool to teach laypeople about things like "sizes of infinities", etc. They are deep math concepts that can be taught with relatively simple analogies that most people understand, and they're interesting things to know. I know that I personally loved learning about them as a kid, before I had almost any knowledge of math - it's one of the reasons that while I initially didn't connect with other areas of math, I found set theory delightful as a kid.
I just feel like if you need to first walk people through a bunch of philosophical back and forth on constructivism, you'll never get to the fun stuff.
We each find different things delightful. What I like, you may not. And vice versa.
But it is easy to present deep ideas from constructivism, without mentioning the word constructivism. Or even acknowledging that the philosophy exists.
For example the second half of https://math.stackexchange.com/questions/5074503/can-pa-prov... is an important constructivist thing. It shows why everything that a constructivist could ever be interested in mathematically, can be embedded in the natural numbers. With all of the constructions needing nothing more than the Peano Axioms. (Proving the results may need stronger axioms though...)
From my point of view, https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach does something similar. That book got a lot of people interested in basic concepts around recursion, computation, and what it means to think. Absolutely everything in it works constructively. And yet that philosophy is not mentioned. Not even once.
The only point where a constructivist need discuss all of the philosophical back and forth on constructivism, is in explaining why a constructivist need not accept various claims coming out of classical mathematics. And even that discussion would not be so painful if people who have learned classical mathematics were more aware of the philosophical assumptions that they are making.
> We each find different things delightful. What I like, you may not. And vice versa.
To be honest, I don't feel like I know enough about the constructivist philosophy. What would be a good place to start if I want to learn more about it?
I haven't yet read your PA proving Goodstein sequences article, though I have skimmed it and it is, indeed, super interesting.
And for the record, Godel, Escher, Bach was probably the single most important influence on me even starting to get interested in computation, etc.
> Errr, I'm just assuming the axioms of ZFC. That's literally all I'm doing.
ZFC (and its underlying classical logic) is precisely the problem here though
You are being very cryptic. Are you trying to say that the existence of uncountable sets requires the axiom of choice? If you are, that's false. If you aren't, I'm not sure what you are trying to say.
I'm definitely not trying to say that the existence of uncountable sets requires the axiom of choice. Cantor's diagonalization argument for the reals demonstrates otherwise.
I'm saying that to go from the uncountability of the reals to the idea that this implies that the infinity of the reals is larger, requires making some important philosophical assumptions. Constructivism demonstrates that uncountable need not mean more.
On the algorithm example, you could have asked what I was referring to.
The result that I was referencing follows from the https://en.wikipedia.org/wiki/Robertson%E2%80%93Seymour_theo.... The theorem says that any class of finite graphs which is closed under graph minors, must be completely characterized by a finite set of forbidden minors. Given that set of forbidden minors, we can construct a polynomial time test for membership in the class - just test each forbidden minor in turn.
The problem is that the theorem is nonconstructive. While it classically proves that the set exists, it provides no way to find it. Worse yet, it can be proven that in general there is no way to find or verify the minimal solution. Or even to provide an upper bound on the number of forbidden minors that will be required.
This need not hold in special cases. For example planar graphs are characterized by 2 forbidden minors.
For the toroidal graphs, as https://en.wikipedia.org/wiki/Toroidal_graph will verify, the list of known forbidden minors currently has 17,523 graphs. We have no idea how many more there will be. Nor do we have any reason to believe that it is possible to verify the complete list in ZFC. Therefore the polynomial time algorithm that Robertson-Seymour says must exist does not seem to exist in any meaningful and useful way. Such as, for example, being findable or provably correct from ZFC.
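To make the oddity concrete, here is the shape of the algorithm whose existence the theorem asserts (a hedged Haskell sketch; Graph and hasMinor are illustrative placeholders of my own, not a real minor-testing implementation):

    type Graph = [(Int, Int)]   -- edge list; purely illustrative

    -- Hypothetical: "does g contain pattern as a minor?" For each fixed pattern
    -- this is decidable in polynomial time, but we leave it abstract here.
    hasMinor :: Graph -> Graph -> Bool
    hasMinor = error "placeholder for a fixed-minor test"

    -- Membership in a minor-closed class, GIVEN its finite list of forbidden minors.
    -- That list provably exists classically, yet for toroidal graphs nobody can
    -- write it down or verify it, which is the whole point being made above.
    inClass :: [Graph] -> Graph -> Bool
    inClass forbidden g = not (any (hasMinor g) forbidden)

The algorithm is trivial once you have the list; the list is the thing that only "exists".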
He never mentioned the Axiom of Choice. I think he articulated his opinion clearly enough. It's his own subjective value judgement.
I don't think either of us thinks what he wrote is subjective or an opinion. They seem like pretty definite truth claims to me.
> you're assuming a lot. Including a notion of mathematical existence which bears little relation to any concept that most lay people have of what mathematical existence might mean.
John Horton Conway:
> It's a funny thing that happens with mathematicians. What's the ontology of mathematical things? How do they exist? In what sense do they exist? There's no doubt that they do exist but you can't poke and prod them except by thinking about them. It's quite astonishing and I still don't understand it, having been a mathematician all my life. How can things be there without actually being there? There's no doubt that 2 is there or 3 or the square root of omega. They're very real things. I still don't know the sense in which mathematical objects exist, but they do. Of course, it's hard to say in what sense a cat is out there, too, but we know it is, very definitely. Cats have a stubborn reality but maybe numbers are stubborner still. You can't push a cat in a direction it doesn't want to go. You can't do it with a number either.
> In what sense do they exist?
In the sense that all statements of non-constructive "existence" are made, viz. "you can't prove that they don't exist in the general case", so you are allowed to work under the stronger assumption that they also exist constructively, without any contradiction resulting. That can certainly be useful in some applications.
Sure, we can choose to work in a set of axioms that says that there exists an oracle that can solve the Halting problem.
But the fact that such systems don't create contradictions emphatically *DOES NOT* demonstrate the constructive existence of such an oracle. Doubly not given that in various usual constructivist systems, it is easily provable that nothing that exists can serve as such an oracle.
If such a system proved that the answer to some decidable question was x, when the actual answer was y, then the system would prove a contradiction. If the system doesn’t prove a contradiction, then that situation doesn’t happen, so you can trust its answers to decidable questions.
If the only questions you accept as meaningful are the decidable ones, then you can trust its answers for all the questions you accept as meaningful and for which it has answers.
Also, “provable that nothing that exists can serve as such an oracle” seems pretty presumptive about what things can exist? Shouldn’t that be more like, “nothing which can be given in such-and-such way (essentially, no computable procedure) can be such an oracle”?
Why treat it as axiomatic that nothing that isn’t Turing-computable can exist? It seems unlikely that any finite physical object can compute any deterministic non-Turing-computable function (because it seems like state spaces for bounded regions of space have bounded dimension), but that’s not something that should be a priori, I think.
I guess it wouldn’t really be verifiable if such a machine did exist, because we would have no way to confirm that it never errs? Ah, wait, no, maybe using the MIP* = RE result, maybe we could in principle use that to test it?
You're literally talking about how I should regard the hypothetical answers that might be produced by something that I think doesn't exist. There's a pretty clear case of putting the cart before the horse here.
On being presumptive about what things can exist, that's the whole point of constructivism. Things only exist when you can construct them.
We start with things that everyone accepts, like the natural numbers. We add to that all of the mathematical entities that can be constructed from those things. This provides us with a closed and countable universe of possible mathematical entities. We have a pretty clear notion of what it means for something in this universe to exist. We cannot be convinced of the existence of anything that is outside of the universe without making extra philosophical assumptions. Philosophical assumptions of exactly the kind that constructivists do not like.
This constructible universe includes a model of computation that fits Turing machines. But it does not contain the ability to describe or run any procedure that can't fit onto a Turing machine.
Therefore an oracle to decide the Halting problem does not exist within the constructible universe. And so your ability to imagine such an oracle, won't convince a constructivist to accept its existence.
You can think that something doesn't exist in the general case, while still allowing that it might exist in unspecified narrow cases where additional constraints could apply. For example, there might be algorithms that can decide the halting problem for some non-Turing complete class of programs. Being able to talk in full generality about how such special cases might work is the whole point of non-constructive reasoning. It's "non-constructive" in that it states "I'm not going to construct this just yet".
Well yes. We can certainly make a function that acts something like that oracle in some special cases. But my point was to give an example of something that cannot be constructively created. The oracle that I described cannot exist within the universe of constructable things.
> Therefore an oracle to decide the Halting problem does not exist within the constructible universe.
I might be confused here, but isn't an Oracle to decide the halting problem something that everyone agrees doesn't exist?
The whole idea is for this to be a thought experiment. "If we magically had a way to decide the halting problem, how would that affect things" seems like a normal hypothetical question.
You literally cannot doubt the existence of this oracle, without doubting what existence means in classical mathematics.
Here is why a classical mathematician would say that this oracle exists.
Let f(program, input, n) be 1 or 0 depending on whether the program program, given input input, is still running at step n. This is a perfectly well-behaved mathematical function. In fact it is a computable one - we can compute it by merely running a simulation of a computer for a fixed number of steps.
Let oracle(program, input) be the limit, as n goes to infinity, of f(program, input, n). Classically this limit always exists, and always gives us 0 or 1. The fact that we happen to be unable to compute it, doesn't change the fact that this is a perfectly well-defined function according to classical mathematics.
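A minimal sketch of the computable part, modeling a program as a step function on states (the names are mine, purely illustrative):

    -- step s = Nothing means the program has halted in state s;
    -- Just s' means one more step of computation.
    runningAt :: (s -> Maybe s) -> s -> Int -> Bool
    runningAt step s0 n = go s0 n
      where
        go _ 0 = True                 -- survived n steps: still running at step n
        go s k = case step s of
                   Nothing -> False   -- halted before step n
                   Just s' -> go s' (k - 1)

    -- The oracle would be the limit of (runningAt step s0 n) as n grows without
    -- bound. Classically that limit exists; no terminating procedure computes it.

Each call inspects only finitely many steps, which is why f is computable even though its limit is not.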
If you give up the existence of this oracle, you might as well give up the existence of any real numbers that do not have a finite description. Which is to say, almost all of them. Why? Because the set of finite descriptions is countable, and therefore the set of real numbers that admit a finite description is also only countable. But there are an uncountable number of real numbers, so almost all real numbers do not admit a finite description.
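Spelled out, the counting step is just this (with Sigma some fixed finite alphabet for writing descriptions):

    |\{\text{finite descriptions}\}| \le \Big|\bigcup_{n \ge 1} \Sigma^{n}\Big| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}|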
The real question isn't whether this oracle exists. It is what you want the word "exists" to mean.
Hmm. Interesting.
If I'm following you, then most "mathematical" CS is based on constructivist foundations? E.g. while a halting problem Oracle might "exist" in the mathematical sense, it's not considered to "exist" for most purposes of deciding complexity classes, etc.
> The real question isn't whether this oracle exists. It is what you want the word "exists" to mean.
I was going to say the same thing. I'm not sure what "exists" means in some of these discussions.
It would be more accurate to say that most mathematical CS fits inside of constructivist foundations. Of course it also fits inside of classical foundations. So someone with constructivist inclinations may be drawn to that field. But participation in that field doesn't make you a constructivist.
As for what exists means, here are the three basic philosophies of mathematics.
The oldest is Platonism. It is the belief that mathematics is real, and we are trying to discover the right way to do it. Ours is not to understand how it exists, but to try to figure out what actually exists. Kurt Gödel is a good example of someone who argued for this. See https://journals.openedition.org/philosophiascientiae/661 for a more detailed exploration of his views, and how they changed over time. (His Platonism does seem to have softened over time.)
Historically this philosophy is rooted in Plato's theory of Forms. Where our real world reflects an ideal world created by a divine Demiurge. With the rise of Christianity, that divine being is obviously God. This fit well with the common idea during the Scientific Revolution that the study of science and mathematics was an exploration of the mind of God.
Formalism dates back to David Hilbert. In Hilbert's own description, it reduces mathematics to formal symbol manipulation according to formal rules. It's a game to figure out what the consequences are of the axioms that were chosen. As for existence, "If the arbitrarily posited axioms together with all their consequences do not contradict each other, then they are true and the things defined by these axioms exist. For me, this is the criterion of truth and existence." See page 39 of https://philsci-archive.pitt.edu/17600/1/bde.pdf for a reference.
In other words if we make up any set of axioms and they don't contradict each other, the things that those axioms define have mathematical existence. Whether or not we can individually describe those things, or learn about them.
Over on the constructivist side of the fence, there are a wide range of possible views. But they share the idea that mathematical things can only exist when there is a way to construct them. But that begs the question.
Finitism only accepts the existence of finite things. In an extreme form, even the set of natural numbers doesn't exist. Only individual natural numbers. Goodstein of the Goodstein sequence is a good example of a finitist.
Intuitionism has the view that mathematics only exists in the minds of men. Anything not accessible to the minds of men, doesn't exist. The best known adherent of this philosophy is Brouwer.
My sympathies generally lie with the Russian school, founded by Markov. (Yes, the Markov that Markov chains are named after.) It roots mathematics in computability.
Errett Bishop is an example of a more pragmatic version of constructivism. Rather than focus on the philosophical claims, he pragmatically focuses on what can be demonstrated constructively. https://www.amazon.com/Foundations-Constructive-Analysis-Err... is his best known work.
Everyone agrees that you can't write an algorithm for a Turing machine (or computational equivalent) that decides the Halting problem for Turing machines in every case. Since this is explicitly worded as "you can't write an algorithm for..." it's in fact talking about a kind of constructive existence and saying that it doesn't apply. The oracle concept is normally phrased as "If we magically had a way to decide this undecidable problem in every case", but its real utility from a constructive POV is talking about special cases that you haven't bothered to narrow down just yet.
> Things only exist when you can construct them.
This is exactly what I’m saying is presumptive! If constructivism is to earn the merit of being less presumptive by virtue of not assuming the existence of various things, it should also not assume the non-existence of those things.
Which, I think many visions of constructivism do earn this merit, but not your description of it.
The underlying problem is that constructivism and non-constructive reasoning are using the word "exists" (and, relatedly, the logical disjunction) to mean very different things. The constructive meaning for "exists" is certainly more intuitive, so it makes sense that constructivists would want it by 'default'; but the non-constructive operator (which a constructivist would preferably understand as "is merely allowed to exist"), while somewhat more subtle, has a usefulness of its own.
So having a different philosophy from you makes me presumptive?
What makes you presume that you have any business telling someone with different beliefs from you, what is OK to believe? You may believe in the existence of whatever you like. Whether that be numbers that cannot be specified, or invisible pink unicorns.
I'll be over in the corner saying that your belief does not compel me to agree with you on the question of what exists. Not when your belief follows from formalism, which explicitly abandons any pretense of meaningfulness to its abstract symbol manipulation.
No, that’s not what I said. Thinking you can determine a-priori that something logically self-consistent cannot exist, when there is no reason that such a thing being physically instantiated would imply a logical contradiction, is the thing I think is presumptive.
Merely believing that such a thing (a halting oracle) doesn’t exist, isn’t something I meant to call presumptive, only believing that you can know a-priori (with certainty) that such things cannot exist.
I don’t claim that you are obligated to agree with me that they do exist. Someone who believes they don’t, but doesn’t believe they can know this as certain a-priori knowledge, would be no more presumptive than I am, and someone who is agnostic on the question of whether they exist would be less presumptive than I am.
Also, I disagree with your notion of “meaningfulness”. At a minimum, all statements in the arithmetic hierarchy are meaningful. The continuum hypothesis might in a certain sense not be meaningful.
> Merely believing that such a thing (a halting oracle) doesn’t exist, isn’t something I meant to call presumptive, only believing that you can know a-priori (with certainty) that such things cannot exist.
If you think that I was making that case, then you have misunderstood something important.
Constructivism is a statement about what kinds of arguments will convince me that things exist.
Could things exist that I don't believe in? Absolutely! There could well be a bank account with my name on it that I don't know about. Its existence is possible, and my lack of belief in it is no skin off of its back. But I still don't believe that it exists.
Similarly, the Platonists could be correct. There could be an omniscient God whose perfect mind gives existence to a perfect system of mathematics, beyond human comprehension. I have no way to prove that there isn't such a God, and therefore that there isn't such a perfect mathematics.
However the potential for such things to exist is a point of theology. I do not believe in their existence. Just as I do not believe in the existence of Santa. In neither case can I prove that they don't exist. And if you choose to believe in them, that's your business. Not mine.
There is nothing presumptive in my laying out the rules of reason that I will accept as convincing to me. There is a lot of presumption if anyone else comes along and tells me that I should think differently about unprovable propositions.
Now it happens to be the case that from the rules of reason that I use, I provably can't be convinced of the existence of certain things. That's a mathematical theorem. But the fact that I can't be convinced, doesn't prove that you shouldn't be convinced. You are free to be convinced of all of the unprovable assertions that you wish. And it is also true that on something like this, I have no way to convince you that it doesn't exist.
On meaningfulness, meaning is in the eye of the beholder. For example there are people who are willing to pay a million dollars for a century old stamp which was misprinted with the airplane upside-down. (See https://en.wikipedia.org/wiki/Inverted_Jenny to verify that.) They clearly find great meaning in that stamp. But I don't.
So again, you're free to find meaning in whatever you want. But you're in the wrong to object that I don't find meaning in what you consider important.
> emphatically DOES NOT demonstrate the constructive existence of such an oracle
Of course, but it shows that you can assume that such an oracle exists whenever you are working under additional conditions where the existence of such a "special case" oracle makes sense to you, even though you can't show its existence in the general case. This outlook generalizes to all non-constructive existence statements (and disjunctive statements, as appropriate). It's emphatically not the same as constructive existence, but it can nonetheless be useful.
That's like asserting the existence of a bank account in my name with a billion dollars in it that I know nothing about.
I won't ever be able to find a contradiction from that claim, because I have no way to find that bank account if it exists.
But that argument also won't convince me that the bank account exists.
That argument ought to convince you that there's a mere "possible world" where that bank account turns out to exist. Sometimes we are implicitly interested in these special-cased "possible worlds", even though they'll involve conditions that we aren't quite sure about. Non-constructive existence is nothing more than a handy way of talking about such things, compared to the constructively correct "it's not the case that the existence of X is always falsified".
It would be weird for a constructivist to be interested in a possible world that they don't believe exists.
Theoretically possible? Sure. But the kinds of questions that lead you there are generally in opposition to the kinds of principles that lead someone to prefer constructivism.
It might be relevant to look at this: https://home.sandiego.edu/~shulman/papers/jmm2022-complement...
Also this: https://arxiv.org/pdf/1212.6543
Assuming you haven't looked at these already, of course.
I had already read the second. I'm not so enthused about the first.
Don’t worry, we have only decided that there are two sizes of Infinitis: normal ones and really big ones.
>Today, mathematics is regarded as an abstract science.
Pure mathematics is regarded as an abstract science, which it is by definition. Arnol'd argued vehemently and much more convincingly for the viewpoint that all mathematics is (and must be) linked to the natural sciences.
>On forums such as Stack Exchange, trained mathematicians may sneer at newcomers who ask for intuitive explanations of mathematical constructs.
Mathematicians use intuition routinely at all levels of investigation. This is captured for example by Tao's famous stages of rigour (https://terrytao.wordpress.com/career-advice/theres-more-to-...). Mathematicians require that their intuition is useful for mathematics: if intuition disagrees with rigour, the intuition must be discarded or modified so that it becomes a sharper, more useful razor. If intuition leads one to believe and pursue false mathematical statements, then it isn't (mathematical) intuition after all. Most beginners in mathematics do not have the knowledge to discern the difference (because mathematics is very subtle) and many experts lack the patience required to help navigate beginners through building (and appreciating the importance of) that intuition.
The next paragraph, about how mathematics was closely coupled to reality for most of history and only recently became too abstract with our understanding of infinite sets, is not really accurate to the history of mathematics. Euclid's Elements is 2300 years old and is presented in a completely abstract way.
The mainstream view in mathematics is that infinite sets, especially ones as pedestrian as the naturals or the reals, are not particularly weird after all. Once one develops the aforementioned mathematical intuition (that is, once one discards the naive, human-centric notion that our intuition about finite things should be the "correct" lens through which to understand infinite things, and instead allows our rigorous understanding of infinite sets to inform our intuition for what to expect) the confusion fades away like a mirage. That process occurs for all abstract parts of mathematics as one comes to appreciate them (except, possibly, for things like spectral sequences).
I agree in general but
> Euclid's Elements is 2300 years old and is presented in a completely abstract way.
depends on what you mean by completely abstract. Euclid relies in a logically essential way on the diagrams. Even the first theorem doesn't follow from the postulates as explicitly stated, but relies on the diagram for us to conclude that two circles sharing a radius intersect.
This is a thought-provoking paper on the issue by Viktor Blasjo, Operationalism: An Interpretation of the Philosophy of Ancient Greek Geometry https://link.springer.com/article/10.1007/s10699-021-09791-4
which was recently the subject of a guest video on 3blue1brown https://www.youtube.com/watch?v=M-MgQC6z3VU
> Pure mathematics is regarded as an abstract science, which it is by definition.
I'd argue that, by definition, mathematics is not, and cannot be, a science. Mathematics deals with provable truths; science cannot prove truth and must deal in falsifiability instead.
You could turn the argument around and say that math must be a science because it builds on falsifiable hypotheses and makes testable predictions.
In the end, arguing about whether mathematics is a science or not makes no more sense than bickering about tomatoes being fruit; it can be answered both yes and no using reasonable definitions.
> In the end, arguing about whether mathematics is a science or not makes no more sense than bickering about tomatoes being fruit
That's the thing, though — It does make sense, and it's an important distinction. There is a reason why "mathematical certainty" is an idiom — we collectively understand that maths is in the business of irrefutable truths. I find that a large part of science skepticism comes from the fundamental misunderstanding that science is, like maths, in the business of irrefutable truths, when it is actually in the business of temporarily holding things as true until they're proven false. Because of this misunderstanding, skeptics assume that science being proven wrong is a deathblow to science itself instead of being an integral part of the process.
In general you aren't testing as an empiricist though, you are looking for a rational argument to prove or disprove something.
The practical experience of doing mathematics is actually quite close to a natural science, even if the subject is technically a "formal science" according to the conventional meanings of the terms.
Mathematicians actually do the same thing as scientists: hypothesis building by extensive investigation of examples. Looking for examples which catch the boundary of established knowledge and try to break existing assumptions, etc. The difference comes after that in the nature of the concluding argument. A scientist performs experiments to validate or refute the hypothesis, establishing scientific proof (a kind of conditional or statistical truth required only to hold up to certain conditions, those upon which the claim was tested). A mathematician finds and writes a proof or creates a counter example.
The failure of logical positivism and the rise of Popperian philosophy show, correctly, that we can't approach that final step in the natural sciences the way we do in maths, but the practical distinction between the subjects is not so clear.
This is all without mentioning the much tighter coupling between the two modes of investigation at the boundary between maths and science in subjects like theoretical physics. There the line blurs almost completely, and a major tool used by genuine physicists is literally pursuing mathematical consistency in their theories. This has been used to tremendous success (GR, Yang-Mills, the weak force) and with some difficulties (string theory).
————
Einstein understood all this:
> If, then, it is true that the axiomatic basis of theoretical physics cannot be extracted from experience but must be freely invented, can we ever hope to find the right way? Nay, more, has this right way any existence outside our illusions? Can we hope to be guided safely by experience at all when there exist theories (such as classical mechanics) which to a large extent do justice to experience, without getting to the root of the matter? I answer without hesitation that there is, in my opinion, a right way, and that we are capable of finding it. Our experience hitherto justifies us in believing that nature is the realisation of the simplest conceivable mathematical ideas. I am convinced that we can discover by means of purely mathematical constructions the concepts and the laws connecting them with each other, which furnish the key to the understanding of natural phenomena. Experience may suggest the appropriate mathematical concepts, but they most certainly cannot be deduced from it. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed. - Albert Einstein
An alternative to abstraction is to use iconic forms and boundary math (containerization and void-based reasoning). See Laws of Form and William Bricken's recent books. Using a unary operator instead of a binary (Boolean) one does indeed seem simpler, in keeping with Nature. Introduction: https://www.frontiersin.org/journals/psychology/articles/10....
Mathematical "truth" all depends on what axioms you start with. So, in a sense, it doesn't prove "truth" either - just systemic consistency[1] given those starting axioms. Science at least grapples with observable phenomena in the universe.
[1] And even this has limits: https://en.wikipedia.org/wiki/Gödel%27s_incompleteness_theor...
Mathematical proofs are checked by noisy finite computational machines (humans). Even computer proofs' inputs-outputs are interpreted by humans. Your uncertainty in a theorem is lower bounded by the inherent error rate of human brains.
I agree we can't be absolutely certain of anything (maybe none of our memories are real and we just popped into existence etc.)
But we can be more sure of the deductive validity of a proof than we can be of any of the claims you make in these sentences, so I don't think they can serve to establish any doubt. If we're wrong about deductive logic, then we can only be more wrong about any empirical claims, which rely on deductive logic plus empirical observations
This may be, but not, I think, in a way that is particularly worth modeling?
When we try to model something probabilistically, it is usually not a great idea to model the probability that we made an error in our probability calculations as part of our calculations of the probability.
Ultimately, we must act. It does no good to suppose that “perhaps all of our beliefs are incoherent and we are utterly incapable of reason”.
Plenty of mathematical proofs have been proven true with 100% certainty. Complicated proofs that involve a lot of steps and checking can have errors. They can also be proven true if exhaustively checked.
> Plenty of mathematical proofs have been proven true with 100% certainty
Solipsists would like to have a word with you...
Tell them to mind their own business ;
You're saying maybe people have mistakenly accepted incorrect proofs now and again, so some theorems that people think are proven are unproven. I agree that this seems very likely.
In practice when proofs of research mathematics are checked, they go out to like 4 grad students. This isn't a very glamorous job for those grad students. If they agree then it's considered correct...
But note this is just the bleeding edge stuff. The basic stuff is checked and reproven by every math undergrad that learns math. Literally millions of people have checked all the proofs. As long as something is taught in university somewhere, all the people who are learning it (well, all the ones who do it well) are proving / checking the theory.
Anyway, when the scientific community accepts a bad proof what effectively happens is that we've just added an extra axiom.
Like when you deliberately add new axioms, there are 3 cases
- Axiom is redundant: it can be proven from the other axioms. (this is ... relatively fine? we tricked ourselves into believing something that is true is true, the reason is just bad.)
This can get discovered when people try to adapt the bad proof to prove other things and fail.
Also people find and publish and "more interesting", "different" proofs for old theorems all the time. Now you have redundancy.
- Axiom contradicts other axioms: We can now prove p and not p.
I wonder if this has ever happened? I.e. people proving contradictions, leading them to discover that a generally accepted theorem's proof is incorrect. It must have happened a few times in history, no?
Of course, maybe the reason this hasn't happened is that the whole logical foundation of mathematics is new, dating back to the Hilbert program (1920s).
There are well known instances of "proofs" being overturned before that, but they're not strictly logical proofs in the Hilbert-program sense, just arguments. (Of course they contain most of the work and ideas that would go into a correct proof, and if you understand them you can do a modern proof.)
e.g. https://mathoverflow.net/a/35558
Cauchy's proof that, if a sequence of continuous functions converges [pointwise] to a function, the limit function is also continuous (Cauchy's proof only holds for uniform convergence, not pointwise convergence, but people didn't really know the difference at the time; a standard counterexample is sketched after this comment).
- Axiom is independent of other axioms: You can't prove or disprove the theorem.
English doesn't have an "I'm just hypothesizing all of this" voice; if it did exist, this post should be in it. I didn't do enough research to answer your question. Some of the above may be wrong, e.g. the part about the 4 grad students. One should probably look for historical examples.
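For reference, the standard counterexample behind that Cauchy story (the comment doesn't spell it out; this is the usual textbook one):

    f_n(x) = x^n \ \text{on } [0,1], \qquad
    f_n \to f \ \text{pointwise, where } f(x) =
    \begin{cases} 0 & 0 \le x < 1 \\ 1 & x = 1 \end{cases}

Each f_n is continuous, but the pointwise limit f is not; the convergence fails to be uniform near x = 1, which is exactly the hypothesis Cauchy's argument silently needed.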
Mathematics is a science of formal systems. Proofs are its experiments, axioms its assumptions. Both math and science test consistency—one internally, the other against nature. Different methods, same spirit of systematic inquiry.
It's not an empirical science, but it is a science, where "science" means any systematic body of knowledge of an aspect of a thing and its causes under a certain method. (In that sense, most of what are considered scientific fields are families of sciences.) Mathematics is what you'd call a formal science with formal structure and quantity as its object of study and deductive inference and analysis as its primary methods (the cause of greatest interest is the formal cause).
A proof is just an argument that something is true. Ideally, you've made an extremely strong argument, but it's still a human making a claim something is true. Plenty of published proofs have been shown to be false.
Math is scientific in the sense that you've proposed a hypothesis, and others can test it.
The difference is that in mathematics you only have to check the argument. In the empirical sciences you have to both check the argument and also test the conclusion against observations
> In the empirical sciences you have to both check the argument and also test the conclusion against observations
That isn't true; you just test new axioms, but most stuff we do in the empirical sciences doesn't require new axioms.
The only difference between material sciences and math is that in math you don't test axioms while in empirical sciences you do.
Empirical science uses both deductive logic to make predictions, and observations to check those predictions. I'm not saying that's all it involves. Not sure which part of that you disagree with
And a lot of what goes on in foundations of mathematics could be described as "testing the axioms", i.e. identifying which theorems require which axioms, what are the consequences of removing, adding, or modifying axioms, etc.
The difference is that mathematical arguments can be shown to be provably true when exhaustively checked (which is straightforward with simpler proofs). Something you don't get with the empirical sciences.
Also the empirical part means natural phenomena needs to be involved. Math can be purely abstract.
You're making a strong argument if you believe you checked every possibility, but it's still just an argument.
If you want to escape human fallibility, I'm afraid you're going to need divine intervention. Works checked as carefully as possible still seem to frequently feature corrections.
It isn't; that's why it's in its own section in STEM, and rightfully so. It's a higher-level tool without which science would come to a screeching halt.
Somewhat tangential to the discussion: I have once read that Richard Feynman was opposed to the idea (originally due to Karl Popper) that falsifiability is central to physics, but I haven't read any explanation.
I'm not sure if it deals only with provable truths? It even deals with the concept of unprovability itself, if the incompleteness theorem is considered part of mathematics
Yes, but Godel proved the incompleteness theorem, by ingeniously finding ways to prove things about unprovability.
The incompleteness theorem doesn't say that there are statements which are unprovable in any absolute sense. What it says is that given a formal system, there will always be statements which that particular formal system can't prove. But in fact as part of the proof, Godel proves this statement, just not by deriving it in the formal system in question (obviously, since that's what he's proving is impossible).
The way this is done is by using a "metalanguage" to talk about the formal theory in question. In this case it's a kind of ambient set theory. Of course, the proof also implies that if this ambient metalanguage is formalized then there will be sentences which it can't prove either, but these in general will be different sentences for each formalized theory.
Science involves both deductive and inductive reasoning. I would in turn argue that mathematics is a science that focuses heavily (but not entirely) on deductive reasoning.
He probably means science in a wider sense as opposed to the anglo-american narrower sense where science is just physics, chemistry, biology and similar topics.
Pure mathematics is just symbol pushing and can never be science. It is a lot of fun though, and as it turned out, occasionally pretty useful for science.
It is absolutely a science, a formal science. What it isn't is an empirical science.
The "symbol pushing" is a methodological tool, and a very useful one that opened up the possibility of new expansive fields of mathematics.
(Of course, it is important to always distinguish between properties of the abstraction or the tool from the object of study.)
Well, we are talking about pure mathematics and there is not much Popperian scientific method in it.
Who cares? That's just semantics. If we define science as the systematic search for truths, then mathematics and logic are the paradigmatic sciences. If we define it as only the empirical search for truth, then perhaps that excludes mathematics, but it's an entirely uninteresting point, since it says nothing.
Not only is intuition important (or the entire point: anyone with some basic training, or even a computer, can follow rules to do formal symbol manipulation; it's the intuition for which symbol manipulation to do when that's interesting), but it is literally discussed in a helpful, nonjudgmental way on Math Stack Exchange. e.g.
https://math.stackexchange.com/questions/31859/what-concept-...
Other great sources for quick intuition checks are Wikipedia and now LLMs, but mainly through putting in the work to discover the nuances that exist or learning related topics to develop that wider context for yourself.
> The next paragraph, about how mathematics was closely coupled to reality for most of history and only recently became too abstract with our understanding of infinite sets, is not really accurate to the history of mathematics. Euclid's Elements is 2300 years old and is presented in a completely abstract way.
I may be off-base as an outsider to mathematics, but Euclid’s Elements, per my understanding, is very much grounded in the physical reality of the shapes and relationships he describes, if you were to physically construct them.
Quite the opposite, Plato, several hundred years before Euclid was already talking about geometry as abstract, and indeed the world of ideas and mathematics as being _more real_ than the physical world, and Euclid is very much in that tradition.
I am going to quote from the _very beginning_ of the elements:
Definition 1. A point is that which has no part. Definition 2. A line is breadthless length.
Both of these two definitions are impossible to construct physically right off the bat.
All of the physically realized constructions of shapes were considered to basically be shadows of an idealized form of them.
Another point to keep in mind is that a lot of mathematics that's not considered abstract _now_ was definitely considered "hopelessly" abstract at the time of its conception.
The complex number system started being explored by the Greeks long before any notion of the value of complex spaces existed, and could be mapped to something in reality.
I don't think we can say the Greeks were exploring complex numbers. There's something about Diophantus finding a way to combine two right-angled triangles to produce a third triangle whose hypotenuse is the product of the hypotenuses of the first two triangles. He finds an identity that's equivalent to complex multiplication (written out below), but this is because complex multiplication has a straightforward geometric interpretation in the plane that corresponds to this way of combining triangles.
There's a nice (brief) discussion in section 20.2 of Stillwell's Mathematics and its History
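For the curious, the identity being referred to is presumably the two-squares identity (my addition; the comment doesn't write it out):

    (a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2

which is exactly |zw| = |z| |w| for z = a + bi and w = c + di: the hypotenuse of the combined triangle is the product of the two original hypotenuses.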
Hell, 0 used to be considered too abstract!
Plato was only about a generation before Euclid. Their lives might have even overlapped, or nearly so: Plato died in 347BC and Euclid's dates aren't known but the Elements is generally dated ~300BC
The only things that are weird in math are things that would not be expected after understanding the definitions. A lot of the early hurdles in mathematics are just learning and gaining comfort with the fact that the object under scrutiny is nothing more than what it's defined to be.
How has mathematics gotten so abstract? My understanding was that mathematics was abstract from the very beginning. Sure, you can say that two cows plus two more cows makes four cows, but that already is an abstraction - someone who has no knowledge of math might object that one cow is rarely exactly the same as another cow, so just assigning the value "1" to any cow you see is an oversimplification. Of course, simple examples such as this can be translated into intuitive concepts more easily, but they are still abstract.
It is abstract in the strict sense, of course. Every science is, as "abstract" simply means "not concrete". All reasoning is by definition abstract, in the sense that all reasoning by definition involves concepts, and concepts are by definition abstract.
Numbers, for example, are abstract in the sense that you cannot find concrete numbers walking around or falling off trees or whatever. They're quantities abstracted from concrete particulars.
What the author is concerned with is how mathematics became so abstract.
You have abstractions that bear no apparent relation to concrete reality, at least not according to any direct correspondence. You have degrees of abstraction that generalize various fields of mathematics in a way that are increasingly far removed from concrete reality.
Right? Math is abstraction at its very core. It's a ridiculous premise, acting as if this is anything but beyond ancient.
Mathematics arose from ancient humans' need to count and measure. Even the invention/discovery of Calculus was in service to physics. It has probably only been 300 years or so since Mathematics has been symbolic; before that it was more geometric and more attached to the physical world.
Leibniz (late 1600s) helped to popularize negative numbers. At the time most mathematicians thought they were "absurd" and "fictitious".
No, not highly abstract from the beginning.
Almost from the first time people started writing about mathematics, they were writing about it in an abstract way. The Egyptians and the Babylonians kept things relatively concrete and mostly stuck to word problems (although lists of Pythagorean triples are evidence for very early "number theory"), but Greece, China and India were all working in abstractions relatively early.
In particular, ancient Greek geometry at least after 300 BC proceeded from axioms, which is a central component of the abstract approach.
> Leibniz (late 1600s) helped to popularize negative numbers.
Wasn't that imaginary numbers?
Sorry what? Ancient humans invented symbols to count. How is that not symbolic?
Geometry is “attached” to the physical world… but in an abstract way… but you can point to the thing you're measuring, maybe, so it doesn't count…
Abstraction was perfected if not invented by mathematics.
Symbolic here refers to doing math with placeholders, be it letters or something else. The ancient world had notations for recording numbers, but much less so for doing math with them - say, like long division.
Archimedes did Calculus before Newton.
https://en.wikipedia.org/wiki/The_Method_of_Mechanical_Theor...
He didn't connect the dots, so no he didn't do calculus even if he did some things related to it.
> My understanding was that mathematics was abstract from the very beginning.
It wasn't; but that's a common misunderstanding born of many centuries of common practice.
So, how has maths gotten so abstract? Easy: it has been taken over by abstraction astronauts [1], which have existed throughout all eras (and not just in software engineering).
Mathematics was created by unofficial engineers as a way to better accomplish useful activities (guessing the best time of year to start migrating, and later harvesting; counting what portion of harvest should be collected to fill the granaries for the whole winter; building temples for the Pharaoh that wouldn't collapse...)
But then, it was adopted by thinkers who enjoyed the activity in itself and started exploring it for sheer joy; math stopped representing "something that needed doing in an efficient way", and was instead considered "something to think about to its last consequences".
Then it was merged into philosophy, with considerations about perfect regular solids, or things like the (misunderstood) metaphor of shadows in Plato's cave (which people interpreted as being about the duality of essences, when it was merely an allegory on clarity of thinking and explanation). Going from an intuitive physical reality such as natural numbers ("we have two cows", or "two fingers") to the current understanding of numbers as abstract entities ("the universe has the essence of the number 'two' floating beyond the orbit of Uranus" [2]) was a consequence of that historical process, when layers upon layers of abstraction took thinkers further and further away from the practical origins of math.
[1] https://www.joelonsoftware.com/2001/04/21/dont-let-architect...
[2] https://en.wikipedia.org/wiki/Hyperuranion
I think it is fair to say that it was always an abstraction. But, crucially, it was built on language as much as it was empiricism.
That is, numbers were specifically used to abstract over how other things behave using simple and strict rules. No?
> That is, numbers were specifically used to abstract over how other things behave using simple and strict rules. No?
Agree that math is built on language. But math is not any specific set of abstractions; time and again mathematicians have found out that if you change the definitions and axioms, you achieve a quite different set of abstractions (different numbers, geometries, infinity sets...). Does it mean that the previous math ceases to exist when you find a contradiction on it? No, it's just that you start talking about new objects, because you have gained new knowledge.
The math is not in the specific objects you find, it's in the process of finding them. Rationalism consists in thinking one step at a time with rigor. Math is the language by which you explain rational thought in a very precise, unambiguous way. You can express many different thoughts, even inconsistent ones, with the same precise language of mathematics.
Agreed that we grew math to be that way. But there is an easy-to-trace history in the names of the numbers: reals, rationals, imaginary, etc. They were largely named based on how language related them to physical things.
Proposed rule: People writing about the history of mathematics, should learn something about the history of mathematics.
Mathematicians didn't just randomly decide to go to abstraction and the foundations of mathematics. They were forced there by a series of crises where the mathematics that they knew fell apart. For example, Joseph Fourier came up with a way to add up a bunch of well-behaved functions - sin and cos - and ended up with something that wasn't considered a function - a square wave (the series is written out below).
The focus on abstraction and axiomatization came after decades of trying to repair mathematics over and over again. Trying to retell the story in terms of the resulting mathematical flow of the ideas, completely mangles the actual flow of events.
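For concreteness (the comment doesn't give the series; this is the standard one): the square wave of height ±1 has the Fourier series

    \frac{4}{\pi} \sum_{k=0}^{\infty} \frac{\sin\big((2k+1)x\big)}{2k+1}

Every partial sum is a perfectly smooth function, yet the limit jumps between -1 and +1, which is exactly the kind of object that strained the era's notion of a "function".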
I have to disagree with this. Modern (pure) mathematics is abstract and very often completely detached from practical applications because of culture and artistic inspiration. There is no "objectivity" driving modern pure mathematics. It exists mostly because people like thinking about it. Any connection to the real world is often a coincidence or someone outside the field noticing that something (really just a tiny-tiny amount) in pure maths could be useful.
> forced there by a series of crises where the mathematics that they knew fell apart
This can be said to be true of those working in foundations, but the vast majority of mathematicians are completely uninterested in that! In fact, most mathematicians today probably can't cite you the set-theoretic (or any other foundation) axioms that they use every day, if you ask them point-blank.
Yeah... The article doesn't even attempt to answer the question in its title. It's just a watered down Intro to Mathematics 101.
I think the title is a little tongue in cheek. The rest of the blog post develops the foundations of arithmetic in a clear, well-grounded manner. This is probably a really good introduction for someone about to take a foundations course. I say this having just read Potter's "Set Theory and its Philosophy", which covers the same material (and a lot more, obviously) in 300-some pages. Another good introduction is Frederic Schuller's YouTube lectures, though already there you can start to see the over-abstraction.
My mental representation of this phenomenon is like inverted Russian dolls: you start by learning the inner layers, the basics, and as you mature, you work your way into more abstractions, more unified theories, more structures, adding layers as you learn more and more. It adds difficulty, but this extreme refinement is also very beautiful. When studying mathematics I like to think of all these steps, all the people, and the centuries of trial and error and refinement it took to arrive where we are now.
The French Bourbaki school certainly had a large influence on increasing abstraction in math, with their rallying cry "Down With Triangles". The more fundamental reason is that generalizing a problem works; it distills the essence and allows machinery from other branches of math to help solve it.
"A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies."
-- Stefan Banach
This article explores a particular kind of abstractness in mathematics, especially the construction of numbers and the cardinalities of infinite sets. It is all very interesting indeed.
However, the kind of abstractness I most enjoy in mathematics is found in algebraic structures such as groups and rings, or even simpler structures like magmas and monoids. These structures avoid relying on specific types of numbers or elements, and instead focus on the relationships and operations themselves. For me, this reveals an even deeper beauty, i.e., different domains of mathematics, or even problems in computer science, can be unified under the same algebraic framework.
Consider, for example, the fact that the set of real numbers forms a vector space over the set of rationals. Can it get more abstract than that? We know such a vector space must have a basis, but what would that basis even look like? The existence of such a basis (Hamel basis) is guaranteed by the axioms and proofs, yet it defies explicit description. That, to me, is the most intriguing kind of abstractness!
Despite being so abstract, the same algebraic structures find concrete applications in computing, for example, in the form of coding theory. Concepts such as polynomial rings and cosets of subspaces over finite fields play an important role in error-correcting codes, without which modern data transmission and storage would not exist in their current form.
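To spell out the Hamel-basis point above (my wording, not the commenter's): a basis B of the reals over the rationals means every real x has a unique finite representation

    x = q_1 b_1 + q_2 b_2 + \cdots + q_n b_n, \qquad q_i \in \mathbb{Q},\ b_i \in B

yet no explicit such B can be written down; its existence comes from the axiom of choice (via Zorn's lemma).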
When I was learning me a Haskell, I had a great time when I realised that as long as my type was a monoid, I could freely chain the operations together purely because of associativity.
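A minimal sketch of that, for anyone who hasn't seen it (hypothetical snippet, not from the thread): once a type has a Monoid instance, (<>) is associative and mempty is the identity, so you can fold a whole list without thinking about grouping.

    import Data.Monoid (Sum(..))

    -- Associativity means any chain of (<>) can be collapsed without
    -- worrying about how it's bracketed; mconcat does exactly that.
    total :: Sum Int
    total = mconcat [Sum 1, Sum 2, Sum 3, Sum 4]

    main :: IO ()
    main = do
      print (getSum total)                                          -- 10
      -- associativity in action: the grouping makes no difference
      print ((("ab" <> "cd") <> "ef") == ("ab" <> ("cd" <> "ef")))  -- True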
The definition of bijection is much more interesting than comparing cardinals. There are many everyday use cases where (structure-preserving) bijections make it clear that two a priori different objects can be treated similarly.
More generally, mathematics is experimental not just in the sense that it can be used to make physical predictions, but also (probably more importantly) in that definitions are "experiments" whose outcome is judged by their usefulness.
There was a time, not that long ago in human history, that zero was "so abstract".
Sure, even 500 years ago negative numbers were "absurd" in Western mathematics, and even in Eastern mathematics, where they were used, they were thought of more as credits and debts than as just abstract numbers.
It was a religious offense to talk about zero.
https://cambriamathtutors.com/zero-christianity/
Discussions of this sort can easily get chaotic, because people tend to conflate intuitiveness and concreteness. Sometimes the whole point of abstraction is to make a concept clearer and more intuitive. The distinction between polynomial function and polynomial is an example.
My hypothesis for this is the disconnect between mathematics and fields like physics and theoretical computer science.
We likely need new mathematics for making progress in physics or ..say.. have a better understanding of the PvsNP kind of problems, but very few high caliber mathematicians are motivated to do this.
Which makes sense, as it’s way easier and more prestigious to define and solve your own abstract problems, publish one paper per grad student per year and coast through research life.
Just drop the axiom of infinity and quit whining.
https://en.wikipedia.org/wiki/Ultrafinitism
Can one do QFT in an ultrafinitistic foundations? My guess is no.
Also, I don’t think ZF sans the axiom of infinity works as an ultrafinitistic theory? It still has every natural number, just not the set of all of them.
I found it a bit ironic that the author introduced C code there as an aid, but didn't incorporate it into their argument. As I see it, code is exactly the bridge between abstract math and the empirical world - the process of writing code to implement your mathematical structure and then seeing if it gives you the output you expect (or better yet, with Lean, if it proves your proposition) essentially makes math a natural science again.
No, the correctness of your implementation is a mathematical statement about a computation running in a particular computational environment, and can be reasoned about from first principles without ever invoking a computer. Whether your computation gives reasonable outputs on certain inputs says nothing (in general) about the original mathematics.
While mathematics "can" be reasoned about from first principles, the history of math is chock-full of examples of professional mathematicians convinced by unsound and wrong arguments. I prefer the clarity of performing math experiments and validating proofs on a computer.
Yes, but a C or Python program that "implements" a proof and which you test by running it on a few inputs is very different from a program in an interactive theorem prover like Rocq or Lean. In the latter, validity is essentially decided by type-checking, not execution.
How have blog post authors gotten so uneducated and/or clickbaity?
Math in its core has always been abstract. It’s the whole point.
> Math in its core has always been abstract. It’s the whole point.
I don't think so. E.g. there may be some abstractions in numerical linear algebra, but the subject matter has always been quite concrete.
It is not a matter of what you think; it is a logical fact, part of the definition if you will.
What you call concrete was the origin of math as we know it. Geometry, astronomy, metaphysics, etc. all had in common the fundamental abstract thing that we call math today.
Saying “math got abstract” is like saying “a tree got wooden”. Because when it was a seed, it wasn’t yet a tree in the full sense.
Isn't this true for many other fields of study?
Given the collective time put into it, easier stuff was already solved thousands of years ago, and people are not really left with something trivial to work on. Hence focusing on more and more abstract things as those are the only things left to do something novel.
two interesting cases: convex analysis and linear algebra are both relatively easy, concrete areas of mathematics. also beautiful and unbelievably useful. yet they didn't develop until the 19th century and didn't mature until the 20th.
You are right, the low hanging fruits were picked a long time ago.
But also wrong, the easier stuff was solved INCORRECTLY thousands of years ago. But it takes advanced math to understand what was incorrect about it.
Infinity is a convenience that pays off in terseness. There's constructive mathematics, but it's wordy and has lots of cases. You can escape undecidability if you give up infinity. Most mathematicians consider that a bad trade.
None of that was even the abstract stuff. It is all models of sizes, order, and inclusion (integers, cardinals, ordinals, sets). Not the nastier abstractions of partial orders, associativity, composition and so on (lattices, categories, ...).
And yet it all circles back.
We used Peano arithmetic when doing C++ template metaprogramming anytime a for loop from 0..n was needed. It was fun and games as long as you didn't make a mistake because the compiler errors would be gnarly. The Haskell people still do stuff like this, and I wouldn't be surprised if someone were doing it in Scala's type system as well.
Also, the PLT people are using lattices and categories to formalize their work.
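For anyone curious what the type-level Peano trick looks like without the gnarly C++ errors, here is a rough Haskell sketch of the same idea (my own illustration, not the commenter's code):

    {-# LANGUAGE DataKinds, TypeFamilies #-}

    -- Peano naturals; DataKinds promotes Z and S to the type level.
    data Nat = Z | S Nat

    -- Type-level addition by structural recursion, mirroring the Peano axioms:
    --   Z + m = m,   S n + m = S (n + m)
    type family Add (n :: Nat) (m :: Nat) :: Nat where
      Add 'Z     m = m
      Add ('S n) m = 'S (Add n m)

    -- The same recursion at the term level, so there is something to run.
    add :: Nat -> Nat -> Nat
    add Z     m = m
    add (S n) m = S (add n m)

    toInt :: Nat -> Int
    toInt Z     = 0
    toInt (S n) = 1 + toInt n

    main :: IO ()
    main = print (toInt (add (S (S Z)) (S Z)))  -- prints 3

The compiler unfolds Add the same way the term-level add runs, which is the "compile-time for loop" the comment is describing.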
It's always been abstract. They'll say to me, "Give me a concrete example with numbers!"
I get what they're saying in practice. But numbers are abstract. They only seem concrete because you'd internalized the abstract concept.
"Indeed, persistently trying to relate the foundations of math to reality has become the calling card of online cranks." <-- Hm??? I'm getting self-conscious. Details?
>Next, consider the time needed for Achilles to reach the yellow dot; once again, by the time he gets there, the turtle will have moved forward a tiny bit. This process can be continued indefinitely; the gap keeps getting smaller but never goes to zero, so we must conclude that Achilles can’t possibly win the race.
Am I daft? Eventually (very soon) Achilles would overtake the turtle's position regardless of how far it moved... Am I missing something?
You're not; the argument is a famous error known as Zeno's paradox. It's only an apparent paradox, and indeed it's been disproven by observing that things do in fact move.
I like the humourous way of putting it, but of course Zeno and his contemporaries knew that things moved - that's exactly why this seemed to be a paradox. Seemingly secure reasoning results in a conclusion that's obviously false.
To resolve the paradox, you have to show what's wrong with the reasoning, not just observe the obviously false conclusion.
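To make that concrete (the standard textbook resolution, not mine): the infinitely many catch-up stages take a finite total time, because the times form a geometric series. If Achilles is, say, ten times as fast as the tortoise, each stage takes a tenth of the time of the previous one, so the total is

    \sum_{n=0}^{\infty} t_0 \left(\tfrac{1}{10}\right)^n = \frac{t_0}{1 - 1/10} = \tfrac{10}{9}\, t_0

a finite time, after which Achilles draws level and passes the tortoise. Infinitely many steps need not add up to infinite time; that is the flaw in the reasoning.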
Wow, this is some serious overcomplication. How can anyone mix philosophy and mathematics? They are not even in the same ballpark... even with infinity. It's just something that can't be understood in the mind, IMHO.
I once walked around Westwood with my future wife telling her all about Karp-reductions of NP-complete problems. Somehow we now have four kids.
One could also say the opposite. It's not abstract at all, just a set of rules and their implications. Plausibly the least abstract thing there is.
On the other hand, two cookies plus three cookies: what even is a cookie? What if they're different sizes? Do sandwich cookies count as one or two? If you cut one in half, do you count it as two cookies now? All very abstract. Just give me some concrete definitions and rules and I'll give you a concrete answer.
I used to be a physicist and I love math for the toolbox it provides (mostly Analysis). It allows one to solve a physical model and make predictions.
When I was studying, I always got top marks in Analysis.
Then came Algebra, Topology and similar nightmares. Oh crap, that was difficult. Not really because of the complexity, but rather because of the abstraction, an abstraction I could not take back to physics (I was not a very good physicist either). This is the moment I realized that I will never be "good at maths" and that it will remain a toolbox to me.
Fast forward 30 years, my son has differentials in high school (France, math was one of his "majors").
He comes to me to ask what the fuck it is (we have an unhealthy fascination for maths in France, and teach them the same way as in 1950). It is only when we went from physical models to differentials that it became clear. We made the same trip Newton did - physics rocks :)
I feel like a great deal more credit should be given to Cauchy and his school, but I understand the tale is long enough.
The Peano axioms are pretty nifty though. To get a better appreciation of the difficulty of formally constructing the integers as we know them, I recommend trying the Natural Number Game in Lean, found here: https://adam.math.hhu.de/
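For a taste of what that involves, here is a rough Lean 4 sketch of Peano-style naturals and addition (hypothetical names, not the game's exact code):

    inductive MyNat where
      | zero : MyNat
      | succ : MyNat → MyNat

    -- Addition by recursion on the second argument, as in the Peano axioms.
    def add : MyNat → MyNat → MyNat
      | m, .zero   => m
      | m, .succ n => .succ (add m n)

    -- One of the first lemmas: adding zero on the right does nothing.
    theorem add_zero (m : MyNat) : add m .zero = m := rfl

The surprisingly hard part, as the game demonstrates, is proving even basic facts like commutativity once you are only allowed to use the axioms.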
I believe that abstraction is recursive in nature, which creates multiple layers of abstract ideas leading to new areas or insights. For instance, our understanding of continuity and limit led to calculus, which, when tied to the (abstract) idea of linearity, led to the idea of a linear operator, which explains various phenomena in the real world surprisingly well.
You could say that abstraction is a step or a ladder: by climbing on an abstraction you can see new goals and opportunities, possibly out of reach until you build yet new steps.
This article can also be written as "The unreasonable effectiveness of abstraction in mathematics."
The number 1 is what a cow, a fox, a stone ... have in common, oneness. Mathematics is abstraction, written down.
That's not obvious.
- they are material objects
- they are concepts I understand
- they are sequences of letters
- they are English words
- ...
Not sure why oneness is privileged as what they have in common, and their oneness is meaningless by itself. Oneness is a property that is only meaningful in relation to other concepts of objects.
A rock is not physically a material object; it is a region of space where the electrons, protons and neutrons are differently arranged, and that region is fuzzy, difficult to determine. But as physical beings, as monkeys, we recognise its oneness; that's necessary for our survival in this physical world. We see this blurred outline of a rock, we feel its weight in our hand, we observe its practical difference from two rocks. Just as we recognise twoness in a pair of rocks, fish, apples, and threeness in a triple of parrots, of carrots, we abstract those out into 1, 2, 3, ...
I like Peano, but he was using Grassmann's definition of natural numbers
What else is it supposed to do?
I think this is a really good question, and the answer might be that ideally you move up and down the ladder of abstraction, learning from concrete examples in some domains, then abstracting across them, then learning from applying the abstractions, then abstracting across abstractions, then cycling through the process.
Unlike Zeno's famous example, the paradox which does a better job of explaining the problem is the coastline paradox, https://en.wikipedia.org/wiki/Coastline_paradox, which Mandelbrot seemed particularly keen on.
The tendency towards excessive abstraction is the same as the use of jargon in other fields: it just serves to gatekeep everything. The history of mathematics (and science) is actually full of amateurs, priests and bored aristocrats that happened to help make progress, often in their spare time.
Complaining about jargon is lazy. Most communications about complicated things are not aimed at the layman, because to do anything useful with the complicated things, you tend to have to understand a fair amount of the context of the field. Once you're committed to actually learning about the field, the jargon is the easiest part: they're just words or phrases that mean something very specific.
To put it another way: Jargon is the source code of the sciences. To an outsider, looking in on software development, they see the somewhat impenetrable wall of parentheses and semicolons and go "Ah, that's why programming is hard: you have to understand code". And I hope everyone here can understand that that's an uninformed thing to say. Syntax is the easy part of programming, it was made specifically to make expressing the rigorous problem solving easier. Jargon is the same way: it exists to make expressing very specific things that only people in this subfield actually think about easier, instead of having to vaguely gesture at the concept, or completely redefine it every time anybody wants to communicate within the field.
Abstraction isn't to gatekeep; it's to increase the utility. It's the same as "dependency inversion" in programming: do your logic in terms of interfaces/properties, not in terms of a particular instance. This makes reasoning reusable. It also often makes things clearer by cutting out distracting details that aren't related to the core idea.
People are aware that you need context to motivate abstractions. That's why we start with numbers and fractions and not ideals and localizations.
Jargon in any field is to communicate quickly with precision. Again the point is not to gatekeep. It's that e.g. doctors spend a lot of time talking to other doctors about complex medical topics, and need a high bandwidth way to discuss things that may require a lot of nuance. The gatekeeping is not about knowing the words; it's knowing all of the information that the words are condensing.
There's no such thing as excessive abstraction in math, because abstraction is the point. Is category theory “excessive abstraction” in your opinion?
> because abstraction is the point.
Formal reasoning is the point, which is not by itself abstraction.
Someone else in this discussion is saying Euclid's Elements is abstract, which is near complete nonsense. If that is abstract our perception of everything except for the fundamental [whatever] we are formed of is an abstraction.
> Formal reasoning is the point, which is not by itself abstraction.
What do you think "formal" means in that sentence.
It means "formal" from the word "form". It is reasoning through pure manipulation of symbols, with no relation to the external world required.
I love how you lot just redefine words to suit your purpose:
https://www.etymonline.com/word/formal "late 14c., "pertaining to form or arrangement;" also, in philosophy and theology, "pertaining to the form or essence of a thing," from Old French formal, formel "formal, constituent" (13c.) and directly from Latin formalis, from forma "a form, figure, shape" (see form (n.)). From early 15c. as "in due or proper form, according to recognized form," As a noun, c. 1600 (plural) "things that are formal;" as a short way to say formal dance, recorded by 1906 among U.S. college students."
There's not a much better description of what Euclid was doing.
I am not, this is what formal logic and formal reasoning means:
https://plato.stanford.edu/entries/logic-classical/
"Formal" in logic has a very precise technical meaning.
What you mean is someone has redefined the word to suit their purpose, which is precisely what I pointed out at the top.
Edit to add: this comment had a sibling, that was suggesting that given a specific proof assistant requires all input to be formal logic perhaps the word formal could be redefined to mean that which is accepted by the proof assistant. Sadly this fine example of my point has been deleted.
Every mathematician understands what a formal proof is. Ditto a formal statement of a mathematical or logical proposition. The mathematicians of 100 years ago also all understood, and the meaning hasn't changed over the 100 years.
> The mathematicians of 100 years ago also all understood, and the meaning hasn't changed over the 100 years.
Isn't that the subject of the whole argument? That mathematicians have gone off down a road in a very specific direction, and everyone who disagrees is ejected from the field, rather like what occurred more recently in theoretical physics with string theory.
Prior to that time quite clearly you had formal proofs which do not meet the symbolic abstraction requirements that pure mathematicians apparently believe are axiomatic to their field today, even if they attempt to pretend otherwise, as argued over the case of Euclid elsewhere. If the Pythagoreans were reincarnated, as they probably expected, they would no doubt be dismissed as crackpots by these same people.
Not all proofs are formal, and most published papers are not formal in the strictest sense. That is why they talk about "formalizing" a proof if there is some question about it. It is that formalization process which often finds flaws.
>quite clearly you had formal proofs which do not meet the symbolic abstraction requirements
I've been unable to imagine or recall an example. Can you provide one?
No, abstraction is the point and formal reasoning is a tool. And yes, what Euclid did is obviously abstraction; I don't know why you consider this stance nonsense.
Can you say how mathematics is inherently abstract in a way consistent with your day-to-day life as a concrete person? Or is your personhood also an abstraction?
I could construct a formal reasoning scheme involving rules and jugs on my table, where we can pour liquids from one to another. It would be in no way symbolic, since it could use the liquids directly to simply be what they are. Is constructing and studying such a mechanism not mathematics? Similarly with something like musical intervals.
Of course I can. I frequently use numbers, which are a great abstraction. I can use the same number five to describe apples, bananas and everything countable.
> to describe apples, bananas and everything countable
An apple is an abstraction over the particles/waves that comprise it, as is a banana.
Euclid is no more abstract than the day to day existence of a normal person, hence to claim that it is unusually abstract is to ignore, as you did, the abstraction inherent in day to day life.
As I pointed out it's very possible to create formal reasoning systems which are not symbolic or abstract, but due to that are we to assume constructing or studying them would not be a mathematical exercise? In fact the Pythagoreans did all sorts of stuff like that.
> An apple is an abstraction over the particles/waves that comprise it, as is a banana.
No, you don’t understand what abstraction is. An apple is exactly an arrangement of particles; it’s not an abstraction over them.
> hence to claim that it is unusually abstract
Who talks about him being unusually abstract (and not just abstract)?
> is to ignore, as you did, the abstraction inherent in day to day life.
How am I ignoring this abstraction when I’ve provided you exactly that (numbers are an abstraction inherent in day-to-day life)? I’m sorry, but you seem to be discussing in bad faith.
> Apple is exactly arrangement of particles, it’s not abstraction over them.
No. You can do things to that apple, such as bite it, and it is still an apple, despite it now having a different set of particles. It is the abstract concept of appleness (which we define . . . somehow) applied to that arrangement of particles.
> I’m sorry but you seem to be discussing in bad faith.
Really?
> No, you don’t understand what abstraction is.
I believe mathematics was much tamer before Georg Cantor's work. If I had to pick a specific point in history when maths got "so abstract", it would be the introduction of axiomatic set theory by Zermelo.
I personally cannot wrap my head around Cantor's infinitary ideas, but I'm sure it makes perfect sense to people with better mathematical intuition than me.
I wish the scroll bar was a little less invisible.
I'm curious how you managed to find nothing on lcamtuf. He's one of the most famous Polish hackers from the 90s, then one the best security researchers Google had. Even if you live under a rock, the substack has an "about" section. If it wasn't for Michał I'd probably be a farmer today.
Did you bother to google his handle? While I don't know his pure mathematics credentials, he's nerd-famous enough to not warrant an introduction. In fact, you not recognizing it says something about you.
>he's nerd-famous enough to not warrant an introduction
What is nerd-famous supposed to be? He's at the center of some subjective in-group that exists in your head?
To be fair, we are on Hacker News. I did once use one of his programs, American Fuzzy Lop (fake advertisement lawsuit incoming if it's not American). So he is not nobody, apparently.
He wrote the American Fuzzy Lop fuzzer, which was extremely influential – pretty much put fuzzing on the map.
Could be him? https://en.wikipedia.org/wiki/Micha%C5%82_Zalewski