It's probably worth adding the context that Wildberger's agenda is to ground mathematics in integers and rational numbers, eliminating those pesky irrationals Euclid introduced, because reasoning about them invariably involves infinities or universal quantifiers, which everyone agrees are tricky and error-prone, even if they don't agree with Wildberger's radical variety of finitism. So he was delighted to find a kindred spirit millennia ago in the Plimpton 322 scribe and, presumably, the entire Babylonian mathematical tradition.
cf. https://en.wikipedia.org/wiki/Divine_Proportions:_Rational_T...
Thanks for the context - I was baffled at first how the Guardian would run with the tagline "a trigonometric table more accurate than any".
But it's because the sine of 60 degrees is said by modern tables to be equal to sqrt(3) / 2, which Wildberger doesn't "believe in"; he prefers to state that the square of the sine is actually 3 / 4 and that this is "more accurate".
The actual paper is at [1]:
[1] https://doi.org/10.1016/j.hm.2017.08.001
Well, no, if you look at a trigonometric table, it doesn't say sin 60° = √3/2, because that isn't a useful value for calculation. It'll say something like 0.866025. But that has an error of a little more than 0.0000004. Instead Wildberger prefers saying that the spread (sin²) is ¾, which has no error. It is more accurate. There's no debate about this, except from margalabargala.
The news from this paper (thanks for the link!) is that evidently the Babylonians preferred that, too. Surely Pythagoras would have.
But how do you actually do anything useful with this ratio ¾? Like, calculating the height of a ziggurat of a given size whose sides are 60° above the horizontal? Well, that one in particular is pretty obvious: it's just the Pythagorean theorem, which lets you do the math precisely, without any error, and then at the end you can approximate a linear result by looking up the square root of the "quadrance" in a table of square roots, which the Babylonians are already known for tabulating.
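To make that concrete, here is a minimal sketch in Python (my own illustration: the names follow Wildberger's terms "spread" and "quadrance", but nothing here is his or the Babylonians' actual procedure):

    from fractions import Fraction
    from math import isqrt

    half_base = Fraction(200)              # half of a 400-cubit base
    spread = Fraction(3, 4)                # sin^2(60 deg), exact
    cross = 1 - spread                     # cos^2(60 deg) = 1/4

    # Exact rational arithmetic all the way: the squared height is
    # half_base^2 * (spread/cross) = 200^2 * 3 = 120000, with zero error.
    quadrance_height = half_base ** 2 * spread / cross

    # Only the final step approximates, playing the square-root table's role.
    height = isqrt(int(quadrance_height))  # 346 cubits (true value ~346.41)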
For more elaborate problems, well, Wildberger wrote the book on that. Presumably the Babylonians had books on it too.
> √3/2 … that isn't a useful value for calculation
Some tables do indeed have that value and it is a very useful value for calculation, one that can be symbolically manipulated to get you an exact number (albeit one likely expressed in radicals) for your work. When I used to teach algebra, it was a struggle to get students to let go of the decimal approximations that came out of their calculators and embrace expressions that weren’t simple decimals but were exact representations of the numbers at hand. (Then there are really fun things like the fact that, e.g., √2 + √3 can also be written as √(5+2√6) (assuming I didn’t make an arithmetic error there)).
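(For the record, the identity checks out: squaring √2 + √3 gives 2 + 3 + 2√6 = 5 + 2√6. A two-line numeric sanity check in Python, purely illustrative:)

    from math import sqrt, isclose

    # (sqrt(2) + sqrt(3))^2 = 2 + 3 + 2*sqrt(6) = 5 + 2*sqrt(6)
    assert isclose(sqrt(2) + sqrt(3), sqrt(5 + 2 * sqrt(6)))  # both ~3.14626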
What do those tables say for 59°59'? I'm skeptical that what you're looking at is, strictly speaking, a trigonometric table.
If you want to know how many courses of bricks your ziggurat is going to need, given that the base is 400 cubits across and there are 10 courses of bricks per cubit, you're going to have to round 2000√3 (the height is 200√3 cubits) to an integer. You can do that with a table of squares, or you can use a decimal (or sexagesimal) fraction approximation, and I guess you're right that it isn't clear that one is necessarily better than the other.
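Both routes are easy to mimic; a sketch (Python standing in for the scribe's tables, and the numbers are just this ziggurat example):

    from math import isqrt

    # (a) Table-of-squares style: find n with n^2 <= 12,000,000 < (n+1)^2,
    #     i.e. n = floor(2000*sqrt(3)), in pure integer arithmetic.
    courses_exact = isqrt(3 * 2000 ** 2)       # 3464

    # (b) Decimal-approximation style: use a tabulated sin 60 ~ 0.866025
    #     (2000*sqrt(3) = 4000*(sqrt(3)/2)) and accept the table's error.
    courses_approx = round(4000 * 0.866025)    # 3464 as well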
Incidentally, the fact that we write things like 59°59'30" comes about because the Babylonians at least weren't using Wildberger's "spreads" all the time.
Those tables don’t give a value for that. They only give values for angles whose trigonometric values can be expressed in terms of rational expressions with radicals (and angles as rational expressions in terms of π).
That sounds like a sort of "trigonometric table" you couldn't use in practice for calculation. You need to be able to look up whatever value you measure with your sextant or theodolite to within its measurement precision. You seem to be using the term "trigonometric table" in a way conflicting with the standard use explained in https://en.wikipedia.org/wiki/Trigonometric_tables perhaps as a sort of pun or joke, or perhaps as a sort of act of activism against hegemonic notions of trigonometry.
What does Wildberger then think about i = sqrt(-1)? Is this also "not accurate" enough?
Ultrafinitism does not rule out higher algebraic structures
> But it's because the sine of 60 degrees is said by modern tables to be equal to sqrt(3) / 2, which Wildberger doesn't "believe in"; he prefers to state that the square of the sine is actually 3 / 4 and that this is "more accurate".
Personally I don't believe in either value. I prefer to state that the sine of 60 degrees is 2.7773. I believe that is more accurate.
Thank you for this expansion. I was about to rabbit hole on how it could be that ratio-based trig (and what is that?) is more accurate than modern calculations.
Re: rationals, I mean there's an infinite number of rationals available arbitrarily near any other rational, that has to mean they are good enough for all practical purposes, right?
> that has to mean they are good enough for all practical purposes, right?
For practical purposes, they’re bad. Denominators tend to explode when you do a few operations (for example 11/123 + 3/17 = 556/2091), and it’s not easy to spot whether you can simplify results. 12/123 + 3/17 = 191/697, for example.
You can counteract things by ‘rounding’ to fractions with denominators below a given limit (say 1000), but then you are likely better off reckoning with a fixed denominator that you then do not have to store with each number, allowing you to increase the maximal denominator.
For example (https://en.wikipedia.org/wiki/Farey_sequence), there are 965 rational fractions in [0,1] with denominator at most 56 (https://oeis.org/A005728/list), so storing one requires just under 10 bits. If you use the fractions n/964 for 0 ≤ n ≤ 964 as your representable numbers, arithmetic becomes easier.
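Both effects are visible with Python's stdlib Fraction (just an illustration of the rounding idea, not a claim about the best representation):

    from fractions import Fraction
    from math import pi

    print(Fraction(11, 123) + Fraction(3, 17))  # 556/2091
    print(Fraction(12, 123) + Fraction(3, 17))  # 191/697 (12/123 reduces to 4/41)

    # 'Rounding' to a bounded denominator, Farey-style:
    print(Fraction(pi).limit_denominator(1000)) # 355/113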
That "density" is how Euclid defined the irrational real numbers in terms of the rationals; his definition, cast into modern language by Dedekind, is what we normally use today.
How do we do things like electrical engineering without imaginary numbers? Is this method an actual improvement?
Imaginary numbers, quaternions, octonions, Clifford Algebras, etc. can still have finite expressions.
After all, the Cayley-Dickson construction is not an infinite affair.
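For instance, one Cayley-Dickson doubling step over the rationals is just a pairing rule; this sketch (my notation, not from any of Wildberger's texts) gives the complex rationals, and feeding its outputs back in would give rational quaternions, and so on. Everything is a finite expression; no reals required.

    from fractions import Fraction as Q

    def cd_mul(x, y):
        # (a, b) * (c, d) = (a*c - d*b, d*a + b*c); conjugation is trivial
        # at this first level because the base field is the rationals.
        a, b = x
        c, d = y
        return (a * c - d * b, d * a + b * c)

    i = (Q(0), Q(1))
    print(cd_mul(i, i))   # (Fraction(-1, 1), Fraction(0, 1)): i^2 = -1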
imaginary numbers are not the same thing as irrational numbers
He doesn't work with imaginary numbers, either. He treats complex numbers as matrices of rationals.
Which is the same thing for all intents and purposes.
An ultrafinitist is still allowed to call that 'i'.
Still kind of freaked out that a Möbius transform can be expressed as a matrix multiplication.
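It drops out of the algebra: (az + b)/(cz + d) composes exactly the way the matrix ((a, b), (c, d)) multiplies. A small sketch (plain Python, purely illustrative):

    def mobius(M):
        (a, b), (c, d) = M
        return lambda z: (a * z + b) / (c * z + d)

    def matmul(M, N):
        (a, b), (c, d) = M
        (e, f), (g, h) = N
        return ((a * e + b * g, a * f + b * h),
                (c * e + d * g, c * f + d * h))

    M = ((1, 2), (3, 4))
    N = ((0, 1), (1, 0))                # z -> 1/z
    z = 0.25
    # Composing the maps matches multiplying the matrices: both print 0.375.
    print(mobius(M)(mobius(N)(z)), mobius(matmul(M, N))(z))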
How do you rationalize pi or e?
You don't. He basically defines numbers like pi and e not as numbers, but as iterative functions, which you can run to whatever level of accuracy that you want. It's sort of a silly argument, because _all_ numbers can be treated like the output of a function, including the real numbers, so he has basically smuggled in all reals through the back door, because any real number can just be thought of as a function with increasingly precise return values with an infinitely long description, just like pi is.
You can't get all the reals that way. The reals that can be produced by an algorithm make up a vanishingly small (e.g. countable) subset. Almost all of the reals are inexpressible.
What I described isn't really an algorithm, it's just taking the digits of a number, let's say:
foo=3.14159265...
Where after 5 is some continuing sequence of decimals.
The series of functions is literally just:
foo(0) = 3
foo(1) = 3.1
foo(2) = 3.14...
And to be clear, it's not just like, an algorithm that estimates pi, it's literally an infinitely long list of return values giving more and more digits of whatever the number is. That is actually how he defines pi.
https://youtu.be/lcIbCZR0HbU?si=3YxcHfPlCFrlr5h3&t=2080
pi _happens_ to be computable, and there are more efficient functions that will produce those numbers, but you could do the same thing with an incomputable number; you just need a definition for the number which is infinitely long.
To be clear, I don't think any of this is a good idea, just pointing out that if he's going to allow that kind of definition of pi (i.e., admit a definition that is just an infinite list of decimal representations), you can just do the same thing with any real number you like. He of course will say that he's _not_ allowing any _infinite list_, only an arbitrarily long one.
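For the curious, the "function from precision to a finite answer" version of pi is easy to mimic in pure integer arithmetic; this sketch uses Machin's formula, which is my choice for illustration and not Wildberger's construction:

    def arctan_inv(x, one):
        # one/x - one/(3 x^3) + one/(5 x^5) - ..., all in integers
        power, total, n, sign = one // x, one // x, 1, -1
        while power:
            power //= x * x
            total += sign * (power // (2 * n + 1))
            n, sign = n + 1, -sign
        return total

    def pi_digits(digits):
        # returns floor(pi * 10^digits) as a plain integer
        one = 10 ** (digits + 10)       # ten guard digits
        p = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
        return p // 10 ** 10

    print(pi_digits(0), pi_digits(1), pi_digits(5))   # 3 31 314159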
That's the key point though, this list isn't infinitely long, and all the numbers in it are rational. And it is an algorithm (specifically, a lookup table).
All the numbers you get this way are going to be rational, and if you require them to be finite, you can't even identify them with any irrational numbers. At least with the computable numbers you get an infinite set of irrational numbers along with the rationals, while still never touching the vast majority of all numbers (the remaining, incomputable irrationals).
To an ultrafinitist, there is no such thing as a number that is inexpressible.
Right, but to be clear, it's not that ultrafinitists like Wildberger believe that they can express all the real numbers; rather, they believe that those inexpressible real numbers don't actually exist.
How does that work for calculus, which regularly looks at the limits of functions as x approaches infinity and has very real real-world applications that stem from such algorithms?
Here is a paper on just how a serious ultrafinitist copes with that https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/...
The short answer is that they deal with such things symbolically.
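As a taste of what "symbolically" can mean in practice (sympy here is my stand-in for the idea, not Zeilberger's actual formalism):

    from sympy import symbols, limit, oo, sin

    x = symbols('x')
    print(limit((1 + 1 / x) ** x, x, oo))  # E, a finite symbol; no completed
                                           # infinity is ever materialized
    print(limit(sin(x) / x, x, 0))         # 1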
Math in general needs to have a big blinking "don't confuse the map for the territory" label on it.
E.g. when you calculate the area of a plot of land do you take into account the curvature of the Earth? You have to make a bunch of compromises in the first place to even talk about what the area of a plot land means.
Math is a bunch of useful systems that we humans have devised. We tend to gravitate towards the ones that help us describe and predict things in the real world.
But there is plenty of math which doesn't do either. It's just as real as the math that does.
I agree. I’m just trying to speak to the “realist” argument that Wildberger presents, claiming that some numbers aren’t real and there’s no point talking about them, even when they come from very practical mathematics, let alone the ones that aren’t. In no way was I trying to claim that some parts of maths aren’t real - I was trying to understand the consistency of what to me seems like a confusing argument to make.
I don't think uncomputable numbers "come from very practical mathematics"! Rather, they come from Gödel, Church, and Turing demolishing Hilbert's program of solving the Entscheidungsproblem once and for all. Possibly, if Hilbert had succeeded, that would have made it "very practical mathematics", or possibly not, but that counterfactual is reasoning from a logical contradiction.
https://plato.stanford.edu/entries/church-turing/decision-pr...
By fiat, of course. :) (e.g. https://en.wikipedia.org/wiki/Indiana_pi_bill)
I never said they were, but could've sworn that the Wikipedia page or parent comment did (I can't find it now and am questioning my sanity). I couldn't understand how he could try to get rid of them, although this isn't surprising as mathematics is basically magic to me once you get past calculus. I guess this is only about removing irrationals though.
It depends on the particular construction. You could construct the "complex rational field" by adding i to the rationals with the rule i^2 = -1. That seems to be what Wildberger is ok with. The standard complex numbers involve adding i to the real numbers, which Wildberger doesn't like.
I don't know how you'd do electrical engineering with the rational complex field, because electrical engineering and physics in general involves a lot of irrational quantities and calculus, and the standard foundations of these concepts use real numbers.
It's really up to finitists to show that there are problems with these methods and that they have a better way of doing things, because so far the standard way seems to work very well.
Electricity has always been standing by to do the same things regardless of how far your imagination wanders away from where it started.
Electricity is not standing by, it is malevolently trying to burn out your equipment. If you allow your imagination to run too far it’ll heat up your equipment and burn it out. You need to increase your capacity to keep your imagination in check.
Alright, I'll bite:
To defend Wildberger a bit (because I am an ultrafinitist) I'd like to state first that Wildberger has poor personal PR ability.
Now, as programmers here, you are all natural ultrafinitists as you work with finite quantities (computer systems) and use numerical methods to accurately approximate real numbers.
An ultrafinitist says that that's really all there is to it. The extra axiomatic fluff about infinities existing is logically unnecessary for doing all the heavy lifting of the math that we are familiar with. Wildberger's point (and the point of all ultrafinitist claims) is that it's an intellectual and pedagogical disservice to teach and speak of, e.g., Real Numbers as if they actually involve infinite quantities that you can never fully specify. We are always going to have to confront the numerical methods part, so it's better to make teaching about numbers methodologically aligned with how we actually measure and use them.
I have personally been working on building various finite equivalents to familiar math. I recommend Radically Elementary Probability Theory by Nelson to anyone who wants a better sense of how to do finite math, at least at the theoretical level. Once again, on a practical level, when directly computing quantities we've only ever done finite math.
I like to imagine at the end of the human race when the sun explodes or whatever, some angelic being will tally up all the numbers ever used by humans and confirm that there are only finitely many of them. Then they'll chalk a tally on the scoreboard in favor of the ultrafinitists.
As long as someone isn't a crank (e.g. they aren't creating false proofs) I enjoy the occasional outsider.
Heh that's about the only place and time when we'll know for sure, and until then, it's just high-grade banter :)
An eventual output of a calculation has to be a finite result, but the concepts that we use to get there are often not.
The standard way of setting up calculus involves continuous magnitudes, hence irrational quantities, and obviously that's used all over physics and there doesn't seem to be a problem with it.
I think to make a compelling case for a finitist foundation for maths you would at the least have to construct all of the physically useful maths on a finitist basis.
Even if you did that, you should show somewhere that this finitist foundation disagrees with the results obtained by the standard foundation, otherwise there's no reason to think the standard foundation is in error.
> Even if you did that, you should show somewhere that this finitist foundation disagrees with the results obtained by the standard foundation, otherwise there's no reason to think the standard foundation is in error.
Well these are probably easy to find even now? E.g. the Banach-Tarski paradox is unlikely to be provable in finitist math, which is somewhat of an improvement.
I was thinking more about applications in physics where calculus and irrational quantities are used all the time.
At more advanced levels the theories are based on differential geometry and operators on Hilbert space. I'm not sure if fully worked out finitist versions of these even exist. Where finitist versions do exist, they're often technically more difficult to use than the standard versions, which is the opposite of an improvement in my view.
Whether it's undesirable for your mathematical foundation to prove the Banach-Tarski paradox is debatable. It's counter-intuitive, but doesn't lead to contradictions, as far as is known. It doesn't apply to physics because the construction uses non-measurable sets.
I'm not a finitist myself but my understanding is that it has as much to do with physics as ZFC does, which is very little. The math used in physics works in practice and did work long before the question of foundations even came up.
The problem that bothers some mathematicians is that despite working well, math still lacks a solid foundation. Furthermore, it's basically proven that such foundations can't even exist, at least for the mainstream version of math. This is where non-mainstream versions pop up. The denial of uncountable sets does help you resolve some of the paradoxes. Not all, unfortunately; even the countable sets already lead to things like the incompleteness theorems. Well, one can dream.
>An eventual output of a calculation has to be a finite result, but the concepts that we use to get there are often not.
This is so true but it can be good if you're flexible enough to try it either way.
With massive tables of physical properties officially produced by pages of 32-bit Fortran it really did look like floating-point was ideal at first. Because it worked great.
The algorithm had been stored as a direct mathematical equation, plain as day, exactly as deduced with constants and operations in 32-bit double-precision floating point.
But when the only user-owned computers were still just 8-bit machines, there was no way to reproduce the exact results across the entire table to the same number of significant figures, using floating point.
Since it's a table it is of course not infinite, and a matrix to boot. A matrix of real numbers across an entire working spectrum.
The algorithm takes a set of input values, calculates results as defined, and rounds it off repeatably in the subsequent logic before output, so everyone can get agreement. The software OTOH takes a range of input values and outputs a matrix. And/or retains a matrix in "imaginary" spreadsheet form for later use :)
Every single value in the matrix is a floating-point representation of a real number, but they are rounded off as precisely as possible to the "exact" degree of usefulness, making them functionally all finite values in the end. This took a lot of work from top mathematicians, computer scientists, and engineers. And as designed, the matrix then carries the algorithm on its own without reference to the fundamental equation.
The solution turned out to involve working backward from the matrix reiteratively until an alternate algorithm was found using only integers for values and operations, up until the final rounding and fixed-point representation at the end. A dramatically unrecognizable algorithm, but it worked and only took 0.5 kilobytes of 8-bit Basic code, which was a fraction of the original Fortran.
This time, a feature that showed up without any extra effort was that precision scaled directly with the bitness of the computer, without need for floating point at all. Of course the Fortran code accomplished this too, by wise use of floating point, but it took a lot bigger iron to do so. And wasn't going to be battery powered any time soon way back then.
>somewhere that this finitist foundation disagrees with the results obtained by the standard foundation,
>there's no reason to think the standard foundation is in error.
This is "exactly" how it was. There were disagreements all over the place but they were in further decimal places not representable by the table. The standard was an international standard having carefully agreed-upon accuracy & precision, as defined by the Fortran which really worked and was then written in stone, with any nonmatched output being a notable failure.
I wonder what ultrafinitists do about topology.
Topology, i.e. the analysis of connectivity, is built upon the notion of continuity and infinite divisibility, which seems to be difficult to handle in an ultrafinitist way.
Topology is an exceedingly important branch of mathematics, not only theoretically (I consider some of the results of topology as very beautiful) but also practically, as a great part of the engineering design work is for solving problems where only the topology matters, not the geometry, e.g. in electronic schematics design work.
So I would consider any framework for mathematics that does not handle topology well as incomplete and unusable.
Ultrafinitist theories may be interesting to study as an alternative, but the reality is that infinitesimal calculus in its modern rigorous form does not need any alternatives, because it works well enough and until now I have not seen alternatives that are simpler, but only alternatives that are more complicated, without benefits sufficient to justify that.
I also wonder what ultrafinitists do about projective geometry and inversive geometry.
I consider projective geometry one of the most beautiful parts of mathematics. When I encountered it for the first time, when very young, it was quite a revelation, due to the unification that it allows for various concepts that are distinct in classic geometry. Projective geometry is based on completing the affine spaces with various kinds of subspaces located at an "infinite" distance.
Without handling infinities, and without visualizing what various curves located at infinity look like (as parts of surfaces that can be seen at finite distances), projective geometry would become very hard to understand, even if one were to duplicate its algorithms while avoiding the names related to "infinity".
Similarly for inversive geometry, where the affine spaces are completed with points located at "infinity".
Such geometries are beautiful and very useful, so I would not consider as usable a variant of mathematics where they are not included.
So what is the length of the diagonal of a unit square, if not square root of 2? It can’t be rational—how is that rationalized by Wildberger?
I don't know about Wildberger specifically, but an interesting point is that only a countable subset of real numbers can be described like that. Polynomials with integer coefficients are countable and so are their roots, which means almost all real numbers are transcendental.
We think we study the real numbers, but it seems we can't even have a system to express them. And indeed, that's not even a limitation of algebraic systems: any notation over a finite alphabet can only express a countable set of distinct objects, which amounts to nothing where real numbers are concerned.
I'm not a finitist, but I do find it curious that we approach mathematics by inventing a more-than-infinite set of objects that's impossible to fully grasp. I don't see it as a bad thing though, I also love Complex Analysis and many people (and some mathematicians even) denounce them for being imaginary. My impression is that transcendental numbers are as imaginary as are imaginary numbers, it's just we don't notice. And they're obviously still useful as are the complex numbers.
> which means almost all real numbers are transcendental
The definable numbers, like 2, pi, or Chaitin's constant [0], form a countable set. The reals are only uncountable because of numbers we can't even talk about.
[0] https://en.wikipedia.org/wiki/Chaitin%27s_constant
That is my point, yes.
If you do not accept that space is infinitely divisible, then the diagonal of a unit square does not actually exist in the space.
A long time has passed since the paradoxes of Zeno of Elea, so now there really is no reason for not accepting that space is infinitely divisible.
The error of Zeno of Elea was that he did not understand the symmetry between zero and infinity (or he pretended to not understand it).
Because of this error, Zeno considered that infinity is stronger than zero, so he believed or pretended to believe that zero times infinity is infinity, instead of recognizing that zero times infinity can be any number and also zero or infinity.
For now, there exists no evidence whatsoever that the physical space and time are not infinitely divisible.
Even if in the future it were discovered that space and time have a discrete structure, the mathematical model of an infinitely divisible space and time would remain useful as an approximation, because it certainly is simpler than whatever mathematical model would be needed for a discrete space and time.
> For now, there exists no evidence whatsoever that the physical space and time are not infinitely divisible.
What is your evidence for it? You want to make a claim about something being infinite, it is up to you to provide evidence.
> recognizing that zero times infinity can be any number and also zero or infinity.
This statement makes no sense in formal mathematics. Multiplication is a function, which means for each set of inputs there is one output. I imagine you are trying to say something about limits here, but the language you are using is very imprecise.
Read Wildberger if you want to know what he thinks.
I can tell you that it is the output of a function, not a distinct entity that exists on its own independently of the computation.
The whole point is that as a theory for the foundations of mathematics, you do not need to assume numbers with infinitely long decimal expansions in order to do math.
> I can tell you that it is the output of a function, not a distinct entity that exists on its own independently of the computation.
Could you elaborate? What is the output of that function if not an entity in its own right? Having studied math with a philosophy minor a long time ago, I am curious.
It's part of a dependency relation: the function computes and produces an output that we call sqrt(2).
On the other hand, using the axioms of ZFC, one can say any real number exists without having a function to compute it, or a proof to construct it.
For an ultrafinitist, or any finitist for that matter, we say that you only need the minimum of ingredients to produce math -- you do not need to assume anything over and above that, as it's not even helpful in the verification process.
So assuming only finitely many symbols and finitely many numbers, I can produce what we call sqrt(2). We only ever verify it numerically and finitely anyways. We can never reach decimals at infinite ordinals.
So it makes no sense to say, "Hey I assume transfinitely many entities, and my assumption says these numbers exist even though the proofs and decimal expansions are only ever finite."
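To make "producing what we call sqrt(2)" concrete, here is a minimal sketch (mine, not a formal ultrafinitist system): a function from requested precision to a finite decimal string, with a finite verification at the end.

    from math import isqrt

    def sqrt2(digits):
        # floor(sqrt(2) * 10^digits), using only integers
        n = isqrt(2 * 10 ** (2 * digits))
        s = str(n)
        return s[0] + '.' + s[1:]

    print(sqrt2(10))                    # 1.4142135623
    # Finite check: n^2 <= 2*10^20 < (n+1)^2 brackets the target.
    n = isqrt(2 * 10 ** 20)
    assert n * n <= 2 * 10 ** 20 < (n + 1) * (n + 1)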
> numbers methodologically aligned with how we actually measure and use them.
We use numbers in compact decimal approximations for convenience. Repeated rational series are cumbersome without an electronic computer and useless for everyday life.
The point is not about restricting what notational conveniences you prefer.
The point is to not confuse the notational convenience with the underlying concept that makes such numbers comprehensible in the first place.
If you're interested in ancient math, take a look at Eleanor Robson's accessible paper on the Plimpton 322 tablet:
https://scispace.com/pdf/words-and-pictures-new-light-on-pli...
Robson's argument is that it isn't a trig table in the modern sense and was probably constructed as a teacher's aid for completing-the-square problems that show up in Babylonian mathematics. Other examples of teaching-related tablets are known to exist.
On a quick scan, it looks like the Wildberger paper cites Robson's and accepts the relation to the completing-the-square problem, but argues that the tablet's numbers are too complex to have been practical for teaching.
More detailed video of Plimpton 322 from the authors of the paper https://youtu.be/L24GzTaOll0?si=sNdwKiM7uYXbzVfL
>”He bought it from Edgar Banks, a diplomat, antiquities dealer and flamboyant amateur archaeologist said to have inspired the character of Indiana Jones – his feats included climbing Mount Ararat in an unsuccessful attempt to find Noah’s Ark – who had excavated it in southern Iraq in the early 20th century.”
A little off-topic, but as a non native English speaker this sentence in the article made me look up whether there’s scientific consensus that Noah’s Ark has been found and I’d just never heard about it. Turns out there isn’t, and the end of the sentence actually refers to the tablet. Was still a fun rabbit hole to go down.
Turns out the PIN on the ancient tablet was just 1-2-3-4-5
https://www.cnbc.com/2019/04/10/toddler-locks-ipad-for-48-ye...
You mean I-II-III-IV-V?