Pentation? How quaint. For other Large Number fans, David Metzler has a wonderful playlist that goes way down the rabbit hole of the fast-growing hierarchy:
https://www.youtube.com/playlist?list=PL3A50BB9C34AB36B3
Highly recommended.
Almost all of these mind-bogglingly large numbers are built around recursion, like the Ackermann function, which effectively has an argument for the number of Knuth up arrows to use. Then you can start thinking about feeding the Ackermann function into that slot and enjoy the sense of vertigo at how insane that becomes.
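To get a concrete feel for that recursion, here's a minimal Python sketch of the two-argument Ackermann function (the classic Ackermann-Péter form; reading the first argument as an up-arrow count is only approximate):

    def ackermann(m: int, n: int) -> int:
        # m roughly plays the role of "how many up-arrows": m=1 behaves
        # like addition, m=2 like multiplication, m=3 like exponentiation,
        # and each increment of m jumps up a whole hyperoperation.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m - 1, n))

    # ackermann(3, 3) == 61 is fine; ackermann(4, 2) already has 19,729
    # digits (and blows the recursion stack computed this way), and feeding
    # Ackermann values back into m is where the vertigo sets in.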
I find it fascinating how quickly the machinery of specifying large numbers probes the limits of what's definable within the mathematical systems we use.
It's funny how, for a certain subset of math, a researcher's life can be condensed to "arguing about which number is the biggest (preschool)" -> "learning about math" -> "arguing about which number is the biggest (academia)"
You might already know this, but the busy beaver function grows faster than any computable function. So although the best-known lower bound on BB(6) can be expressed with just pentation, the BB function as a whole is far beyond any standard operation in terms of growth rate.
TREE(n) is computable.
BB(n) surpasses TREE(n) by n=2645 at the latest.
And likely already shortly after n=100.
Now consider the following definition of a seemingly much faster-growing function:
HyperTree(n) is the TREE function nested x times, where x = TREE(n).
HyperTree(3) = TREE(TREE(...TREE(3)...)), nested TREE(3) levels deep.
BB(x) would (should) still outpace HyperTree(x) at some value of x.
I don't even know where to begin to give an upper or lower bound on when BB(x) becomes larger than HyperTree(x).
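A purely illustrative Python sketch of that definition (TREE has no feasible implementation, so it's passed in as a parameter; the names here are mine):

    def hyper_tree(n: int, tree) -> int:
        # Nest `tree` x times, where x = tree(n), starting from n.
        x = tree(n)
        value = n
        for _ in range(x):
            value = tree(value)
        return value

As the reply below notes, this kind of plain iteration adds surprisingly little in fast-growing-hierarchy terms.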
BB has been shown to surpass TREE at n=150, which as you note is likely still very generous. HyperTree would typically only require a few more states. HyperTree doesn't grow meaningfully faster than TREE: HyperTree(3) is what would be called a salad number, combining a very fast-growing function with some very weak operation(s) such as iteration, which in the fast-growing hierarchy corresponds to taking the successor of an ordinal.
Numberphile has also built-up a fantastic collection of videos over the years about unfathomably large finite numbers:
https://youtube.com/playlist?list=PLt5AfwLFPxWJ_FwchNAjB3xa8...
The tools required to probe and compare these monsters are so interesting.
This too:
https://www.mrob.com/pub/math/largenum.html
Cool wiki linked there: https://googology.fandom.com/wiki/Googology_Wiki
Reminds me of the Scott Aaronson essay "Who Can Name the Bigger Number?": https://www.scottaaronson.com/writings/bignumbers.html
There is something awesome about incredibly large finite numbers. People gush about infinity, but I find it to be a pretty boring concept compared to finite numbers too large to even be written in this universe.
Infinity is aspirational. Infinity is a concept, simple and self-evident, yet paradoxical and oxymoronic.
I get kind of intrigued by these large number things at first, but ultimately it feels like a less elegant version of the same thing? It's all this mucking about with multiples and powers of multiples and powers, when we've already taken this to the limit; we can just stand back and marvel at that. What are we looking for? We already know you can always add another digit, so why invent more and more complicated ways to do that?
This isn't meant to be contradictory to what you're saying or anything, just interesting to explore these different perspectives on what mathematical concepts capture the imagination.
I'm wondering if there's a connection between large number hunters, unwritten rule proponents in sports and games, and modular synth collectors. There's a sort of satisfaction derived from finding and defining order according to aesthetic structures that are largely arbitrary but also emotionally resonant.
Meanwhile, infinity is for those that embrace chaos, minimalism, nothingness.
> People gush about infinity, but I find it to be a pretty boring
You need to look at this:
https://neugierde.github.io/cantors-attic/
Yeah!
Like, there is a perfectly finite number, but it is so large that there simply isn’t enough information in the universe to encode it in any format. How cool is that to just think about for a while?
I think such a number is going to have strange properties, like: some number bigger than that unencodable number is encodable because of a special pattern that allows a special non-surjective recursive function to encode it. I am just wondering if there really is a smallest number for which no number greater than it is encodable.
It is not obvious to me that the definition of an encodable function has bounded growth: is it true that f(1) - f(0) for encodable f always has a bound given by the amount of data used to encode f? What is that bound?
The parent suggested that the number couldn't be encoded due to its largeness rather than its complexity. So while any number n with Kolmogorov complexity K(n) > 10^100 cannot be recursively encoded in the known universe, such an n need only be 10^100 bits long. A number that's too large to be recursively encoded in the known universe, on the other hand, would have to exceed BBλ2(10^100), where BBλ2 is an optimal busy beaver for prefix-free binary programs [1].
[1] https://oeis.org/A361211
Yes, I understood what the parent suggested. I am pointing out that such a number may have strange properties, like the fact that a number larger than it can have a smaller Kolmogorov complexity. I am then questioning whether there is a number such that every number larger than it has a Kolmogorov complexity too large for it to be encoded. The question therefore becomes: is there a limit to the size of physically describable numbers, or is there always going to be some larger number with trivial Kolmogorov complexity?
Beyond BBλ2(10^100), there is no larger number with complexity under 10^100.
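To spell out the two bounds in this subthread (my paraphrase, using standard facts about prefix-free Kolmogorov complexity; BBλ2 is the commenter's notation):

    % Any n can be printed from a self-delimiting encoding of its binary
    % expansion plus an O(1)-size driver program, so
    K(n) \le \log_2 n + 2\log_2 \log_2 n + O(1).
    % Hence some n of only about 10^100 bits already has K(n) > 10^100,
    % while, by the statement above, every n of complexity at most 10^100
    % satisfies
    n \le \mathrm{BB}_{\lambda 2}(10^{100}).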
I absolutely love this question.
Postulate: You cannot define a largest physically describable number.
My assumption is that, due to the very nature of Kolmogorov complexity (and other Gödel-related / halting-problem-related / self-referential descriptions), this is not an answerable or sensible question.
It falls under the same language-enabled recursion problems as:
- The least number that cannot be described in less than twenty syllables.
- The least number that cannot be uniquely described by an expression of first-order set theory that contains no more than a googol (10^100) symbols.
Would you like to also be able to communicate about this number? You might have to reserve some atoms to form a being that could actually enjoy it. Considering such a perspective, the decoder for observing it should probably be embedded in the number itself.
but couldn't you encode it as "the smallest number that cannot be encoded within the universe's matter/entropy"?
If you could encode it that way, then it's incoherent. After all, that encoding exists within the universe. If it resolved to a value, that would disqualify that value from being correct because of the self reference.
What's the decoding algorithm for that?
Not really, as that implies you have a list of numbers that can be encoded within the universe, but the universe would run out of room keeping that list.
For me it is far more interesting to consider the number of atoms in the solar system, as it implies a pretty obvious limit on the data storage that humanity can meaningfully use. Obviously only a tiny fraction of those atoms can be used to store information, and furthermore the energy required to actually perform that storage is enormous compared to the energy we have available at present.
There is enough information if you assume reality is continuous. Pick a point A to be the origin. Then you can encode the number N by placing something 1/N meters away from the origin.
No, because you can’t have something have that precisely defined of a position.
You can - you merely need enough energy to define it that precisely (per the Heisenberg uncertainty principle)!
Unless... such a concentration of energy puts it within its own Schwarzschild radius...
So you can’t.
You can't halve a Planck length, so you're limited to ~1.6×10^−35 m.
I think current theories break down at less than a Planck length, but they are not constrained to integer multiples of it.
Relativistic speeds can contract the length of any object as measured from an outside observer. If an object the size of 1 Planck length travels fast enough: you won't be able to measure it, as from your position it would be smaller than the Planck length as it passes by.
It's not impossible (afaik) for things to be smaller than the Planck length. We just don't have the ability (maybe ever) to measure something smaller than this limit.
Now, good luck finding something the size of 1 Planck length, and also to accelerate it to relativistic speeds.
By definition you can if you accept it's continuous.
No, because space might be continuous, but that doesn’t mean the uncertainty principle and the Planck limit disappear.
The Compton wavelength will probably cause trouble for the storage scheme long before gravity becomes a problem.
Those are only relevant for decoding it.
Layman question, but can space be quantized (not sure what the proper term is)? Like, is there a finite number of positions for a particle between two points?
Frustratingly, attempts to discretize space invariably run into problems with relativity, since they effectively impose a preferred frame of reference. I.e. you can impose a minimum distance, but relativistic length contraction means that observers measure different minima and in different directions.
Apparently, under some of these models, this implies that the speed of light ends up depending on wavelength, lending them to empirical tests. My understanding is that these discrete space models have failed to line up with experiment, at least within the limits of measurement.
The current theories use continuous space and time. However, we can't encode information into space itself. We would have to use some configuration of matter, and then there are limits to how well-defined a particle's position can be coming from the uncertainty principle.
On the other hand, general relativity implies that if you put enough matter in a small enough space it becomes a black hole, and then we can't access the information in it.
IANA physicist but I think this line of thought is probably speculative at the moment because it involves both general relativity and quantum mechanics and it isn't known how they should work together.
That's currently unknown. For all current practical purposes, kind of. The Planck length sets a limit on the spatial resolution of any information, so a finite region with (universally) bounded entropy per conceivable bucket on that scale still has finite entropic capacity.
You are stretching the definition of "is" here. Almost all finite numbers are impossible to actually count up to, so they exist only in the same sense that infinite numbers exist.
Indeed! Funnily enough, there seems to be something similar going on between defining large finite numbers and defining large countable cardinals.
And I’m assuming they can be converted to incredibly small finite numbers just as easily?
If you have a child who likes math I highly recommend "Really Big Numbers" by Richard Schwartz. Tons of nice illustrations on how to "take bigger and bigger steps".
"Infinity is farther away than you thought."
I really like the Busy Beaver stuff. I wish I had been exposed to it (at least enough to play with it some) in high school. It reminds me a bit of Jorge Luis Borges' story "The Library of Babel".
Does anybody know of other interesting problems in the Busy Beaver space?
https://www.scottaaronson.com/papers/bb.pdf
This paper contains many conjectures around BB that could be interesting to some.
Fictionally, maybe the Mandelbrot Maze mentioned in Arthur C. Clarke’s 3001:
> Their approach was more subtle; they persuaded their host machine to initiate a program which could not be completed before the end of the universe, or which - the Mandelbrot Maze was the deadliest example - involved a literally infinite series of steps.
https://archive.org/stream/SpaceOdyssey_819/3001_The_Final_O...
I loved that part of the book. I thought it was so cool how they thoroughly scrubbed the Internet of these dangerous programs, and then stored the very last copy on a disk in an ancient lava tube on the moon, just in case.
The Collatz conjecture - and I would have said that even before we solved BB up to the point where it reduces to Collatz-like problems.
Numberphile just did a video on subcubic graph numbers, which grow much, much faster than TREE numbers.
Do we know if they grow faster than busy beavers?
https://youtu.be/4-eXjTH6Mq4
No. The busy beaver function grows faster than any computable function.
What's more, for any N > 549, you cannot prove (in the usual universe of mathematics) that any computable number is equal to or larger than BB(N). (At least that's my understanding of the independence results (https://wiki.bbchallenge.org/wiki/Independence_from_ZFC))
The algorithm for BB(N) is (with some hand waving) “for each N-state Turing machine, check if it halts on an initially empty tape. If it does not, skip it. If it does, run it until it halts and count the number of steps, k. BB(N) is the maximum value of k you find.”
The problem is that the naive algorithm above requires an oracle for the Halting Problem, which every CS student learns is impossible in general for a computational model that isn’t more powerful than a Turing Machine.
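Here is that hand-wavy procedure as a Python sketch of my own; the helpers are stubs, and `halts` is exactly the uncomputable oracle mentioned above, which is why this can never be a real program:

    from typing import Any, Iterable

    def enumerate_turing_machines(n: int) -> Iterable[Any]:
        """All n-state, 2-symbol Turing machines (finitely many); stub."""
        raise NotImplementedError

    def halts(tm: Any) -> bool:
        """Halting oracle for a blank tape; uncomputable, so forever a stub."""
        raise NotImplementedError

    def steps_until_halt(tm: Any) -> int:
        """Run a machine already known to halt and count its steps."""
        raise NotImplementedError

    def busy_beaver(n: int) -> int:
        # Maximum step count over all halting n-state machines on empty tape.
        return max(steps_until_halt(tm)
                   for tm in enumerate_turing_machines(n) if halts(tm))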
So if a function can be expressed in terms of a Turing machine, busy beaver has to grow faster because eventually for some value of N, you can just encode the candidate function as a Turing machine and have Busy Beaver simulate it.
Busy Beaver hunters on the low end (trying to find the next number) have to exhaustively prove whether each Turing machine eventually halts or not at that number of states so that they can evaluate the longest running one with finite execution.
Busy Beaver is non computable and grows much faster than the various subcubic graph numbers (you could easily encode a subcubic graph number computation in a relatively small turing machine).
The busy beaver function is interesting precisely because you _can't_ come up with any computable function that grows faster.
TREE(3) is so crazy to me. You would expect it to either be a reasonable number, or to be infinite (i.e. there's an infinite sequence of such trees).
But no. It's just an _impossibly unfathomably large number_. Eventually there are no more trees with fewer than $i$ vertices, but only at such a stupidly, incredibly large $i$ that it makes other "googology" numbers vanish by comparison.
> it makes other "googology" numbers vanish by comparison.
I consider TREE to be medium sized in the googology universe. It obliterates Graham's Number of course, as well as Goodstein sequences and simple hydras. But it's easily bested by an ingenious matrix generalization of hydras known as the Bashicu Matrix System, even when fixed at 2 rows. 3-row BMS goes far beyond that, let alone n-row. BMS is so simple that it can be encoded in under 50 bytes [1], where TREE would take significantly more. Beyond that there are vastly greater numbers still, such as Loader's Number. I shouldn't mention Rayo's, because that's no longer computable in the usual sense.
[1] https://codegolf.stackexchange.com/questions/18028/largest-n...
There is a downvoted comment that reads "ah yes the totally new math of exponentiation". The snark is uncalled for, but that's actually the essence of this article: it talks about repeated exponentiation as if it were some profound mathematical discovery.
It isn't. The article neglects to explain what makes busy beaver numbers interesting in the first place. And I think it's symptomatic of the Quanta Magazine articles that feature on HN several times a week: a profound-sounding title and pleasant writing, but not much substance beyond that.
Indeed, I would have at least liked to get a rough understanding of the tricks they use to classify and discard Turing machines, and how they construct these enormous runtime calculations. They are clearly computing something but they are obviously not actually running the machines normally for those numbers of steps.
but because that may actually be useful
you gotta pay for that. or dig through the academic publisher archives; which you also gotta pay for unless you believe in digital piracy and evil copyright infringement which may or may not fund terrorism
like they used to say: information wants to be expensive so pay to be free
I usually get drawn into their posts also and didn’t realize they were low value; my IQ must not be high enough to differentiate. What would you recommend as alternative sources?
I don't think there's anything that would put some new, esoteric math concept in your mailbox every week, although there's plenty of books that cover recreational mathematics in an accessible way (Martin Gardner, Ian Stewart, etc). And for QM articles, I recommend searching the web - you can often find better explanations on some 1999-style blog somewhere.
The problem with this particular article is simple: busy beaver numbers aren't interesting because they're big. They don't break mathematics because of that; you can always say "+1" to get a larger number. There's also nothing particularly notable about Knuth's up-arrow notation, which is essentially a novelty that you're never gonna use. Instead, the numbers are interesting because they have fairly mind-blowing interactions with the theory of computability.
Don't let specialists detract from your enjoyment of Quanta-level articles. This one is well-written, makes no egregious errors, and only omits one important fact. And that fact is ...
All this talk about exponentiation, tetration and pentation has its roots in Peano arithmetic, which, for reasons of logical clarity, defines precisely one function -- the "successor function":
s(n) = n+1
Using just this function, Peano defines addition, multiplication and exponentiation with great clarity. Since Peano's time, people have been making adjustments, like including zero among the counting numbers and extending exponentiation into tetration, pentation and a few more exotic operations fully discussed in the linked article and elsewhere.
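A minimal Python sketch of that ladder, building each operation from the one below it exactly as described (grossly inefficient by design, since everything bottoms out in the successor):

    def s(n):          # Peano's lone primitive: the successor
        return n + 1

    def add(a, b):     # addition = b-fold successor
        return a if b == 0 else s(add(a, b - 1))

    def mul(a, b):     # multiplication = b-fold addition
        return 0 if b == 0 else add(a, mul(a, b - 1))

    def power(a, b):   # exponentiation = b-fold multiplication
        return 1 if b == 0 else mul(a, power(a, b - 1))

    def tetrate(a, b): # tetration = b-fold exponentiation
        return 1 if b == 0 else power(a, tetrate(a, b - 1))

    # power(2, 5) == 32 and tetrate(2, 3) == 16 work; anything much larger
    # blows the recursion stack, which is rather the point. Pentation would
    # be the next rung on the same ladder.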
I personally would have liked to see a more complete exposition of the logical roots of the Busy Beaver challenge, and I think the missing parts would have made the article a better read, even for non-specialists. Maybe especially for those readers.
But Quanta articles are perfectly suited to their audience, people who may be inspired to look for more depth elsewhere.
There isn't really anything better. The Notices of the AMS has one feature article each issue, but those are sometimes too technical.
Anyway, this article seems fine to me. The "exponentiation" comment seems like a bizarre misreading. The article is just trying to explain how big BB(6) is. Before that, it explains what BB(n) is. To think it's solely about exponentiation you have to skip that entire section.
It's crazy to me that we're now writing articles about the fact that a large number is large.
Hey guess what, I can imagine a number even larger. It's BB(6) + 1, isn't that amazing and interesting? Wow imagine BB(6) multiplied by a googolplex, amazing. Wow, such number.
What's the point? Numbers are infinite, what else is new?
>What's the point?
What's the point of your comment? To suck the joy out of everything?
Turing machines are a fundamental object of study within the theory of computation. The complexity and wild behavior that arises from even the simplest machines is a cool discovery. BB(6) was thought to be a lot smaller, but it turns out to be really huge. The Busy Beaver game is also interesting to those who work on combinatorics and theorem provers. And of course many many people in the space of recreational math & volunteer computing love this challenge.
You don't like it? Ok, then. You don't have to.
I wonder whether there's anything interesting on the tape after a BB(5) or BB(6) champion halts?
You can read about the BB(5) winner here - it looks like the final tape state is 4,098 ones in a row:
https://bbchallenge.org/1RB1LC_1RC1RB_1RD0LE_1LA1LD_1RZ0LA?w...
https://wiki.bbchallenge.org/wiki/5-state_busy_beaver_winner
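For the curious, here's a small Python simulator of my own that parses the standard machine notation used in the link above; assuming I've read the format correctly, it should reproduce the 47,176,870 steps and the 4,098 ones (it takes a few minutes in pure Python):

    def run(tm: str):
        # Format "1RB1LC_...": one chunk per state; each chunk holds two
        # (write, move, next-state) triples, for reading symbol 0 and 1.
        table = {}
        for i, chunk in enumerate(tm.split("_")):
            for sym in (0, 1):
                w, mv, nxt = chunk[3 * sym : 3 * sym + 3]
                table[(chr(ord("A") + i), sym)] = (int(w), 1 if mv == "R" else -1, nxt)
        tape, pos, state, steps = {}, 0, "A", 0
        while state != "Z":                # 'Z' marks the halt transition
            w, mv, state = table[(state, tape.get(pos, 0))]
            tape[pos] = w
            pos += mv
            steps += 1
        return steps, sum(tape.values())   # (step count, ones on tape)

    # The 5-state champion from bbchallenge.org:
    print(run("1RB1LC_1RC1RB_1RD0LE_1LA1LD_1RZ0LA"))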
See also https://news.ycombinator.com/item?id=44406171
[flagged]
Ah yes the totally new math of exponentiation
Hmm, what a shame that the LaTeX content wasn't properly rendered in the linked article. When I noticed this I assumed someone had forgotten to include the MathJax JavaScript library, which, when present, renders LaTeX notation into properly typeset math client-side.
As it turns out, the MathJax library is called from the article's HTML (in line 1765), but for some reason it's not working.
A long-running debate argues that LaTeX rendering should be a default browser feature, so far without success. In the meantime MathJax works pretty well unless someone disables JavaScript client-side.
Reasonable people may differ, but not being able to render LaTeX is egregious and wrong.
Try reloading, the endpoint that loads MathJax might be a bit overloaded, but it does work.
Yes, you're right -- a reload cured it. I'm accustomed to the old Internet, where Web pages were smaller and page deliveries were more reliable.
At some point you gotta wonder if there’s even a difference between “stops after N” and “never stops”.
I mean obviously there is, it’s the same difference between N and infinity. But… is there really?
In a mathematical sense - absolutely. You can pit the halting problem against many very tangible qualities - like whether a (provable) statement is true or false. A program that halts after a large n is closer to an instantly halting program not just because n is always closer to 0 than to infinity, but because "halts after a large n" and "halts instantly" are ontologically similar in a way they just aren't with non-halting programs.
Welcome to ultrafinitism!
Well, since we got off the gold standard, we might need those big numbers.
> In 1936, the pioneering logician Alan Turing proved that there’s no universal procedure for answering this question, which became known as the halting problem. Any method that works for some programs will fail for others, and in some cases, no method will work.
This is weirdly stated. The first sentence is correct. But trivially, either a machine that always says “yes” or a machine that always says “no” will give the correct answer for any given case.
Given a particular set of axioms, there are machines which do not halt, but which cannot be proven not to halt, and maybe that’s what the article means. But that’s going beyond the halting problem.
"this question" is Given the code of a computer program, can you tell whether it will eventually stop or run forever?
It is indeed the Halting problem. (Except that they forgot to state what the input is (the code itself).
The objection, which I agree with, is to the statement "Any method that works for some programs will fail for others, and in some cases, no method will work."
There is no case where no method whatsoever will work. It's true that for any method, there are cases where it fails but it's not true that there exist cases for which every method fails.
Okay, I see what you mean.
It really hinges on what is meant by "this question", i.e., "Given the code of a computer program, can you tell whether it will eventually stop or run forever?"
If what's given to you is the code and its input, then I think the statement's correct.
However, if the input is assumed to be either the code itself or any other fixed thing, then I agree with you.
I don't see how what's given to you changes things? If there's ambiguity, I think it's in whether the question is actually the halting problem:
If it is the halting problem — less ambiguously written as "given the code of a computer program and its input, will it run forever?" — then the statement is incorrect: for every possible program and input, there is a method that returns the correct result.
If it is about proving whether a machine halts — not getting it right by chance, but showing that a particular machine halts or runs forever — then the statement is correct, for any set of axioms there are Turing machines that cannot be proven to halt or run forever using those axioms.
I think your quantifiers are in the wrong order.
Because you wrote "But trivially, either a machine...", I think you must think that the second sentence in your quote means "For every program p, there does not exist any method m that, when applied to p, determines whether p halts" -- which would clearly be false, since it allows every program to have its own method (and as you say, just 2 "constant-valued methods" suffice). But I interpret that second sentence in your quote as "There does not exist a single method m that, when applied to any program p, determines whether p halts", which is correct.
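In symbols, my gloss of the two readings:

    % Per-program reading (true: each p gets its own constant method):
    \forall p\, \exists m:\ m \text{ correctly decides whether } p \text{ halts}
    % Uniform reading (false: this is Turing's theorem):
    \exists m\, \forall p:\ m \text{ correctly decides whether } p \text{ halts}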
>Given a particular set of axioms, there are machines which do not halt, but which cannot be proven not to halt, and maybe that’s what the article means.
I also think that's what the article means, and I think it's a very nice way of putting it, though it's an interesting enough wrinkle to discuss further.
I think their issue is with the end of the sentence: "and in some cases, no method will work", which I interpret to say there are some programs for which no method can determine whether they will halt.
That's obviously incorrect; the program either halts or it does not. So if you had a function that always returned true and another that always returned false, one of those functions would correctly tell you whether a given program will halt — for every program that could possibly exist.
The hard part is figuring out which function to use :)
A watch that just says PM is correct some of the time.