Graham's Number Is Too Big to Tell How Big It Is (scientificamerican.com)
201 points by sjcsjc on Jan 26, 2018 | hide | past | favorite | 92 comments


Graham's Number used to be the biggest number I could name, but now I understand that TREE(3) ( https://en.wikipedia.org/wiki/Kruskal%27s_tree_theorem ) is bigger still by a truly incredible amount (and TREE(1) = 1, and TREE(2) = 3).

Unfortunately I don't understand that Wiki page.

The page links to https://en.wikipedia.org/wiki/Friedman%27s_SSCG_function , which is supposedly even bigger but I can't follow the text of the first page so won't attempt to read that.


I will take this opportunity to promote the YouTube channels of Brady Haran[0], Numberphile specifically. It has all sorts of interesting videos, including Graham's number [1] (one of the videos featuring Ron Graham) and TREE(3) [2].

His other channels are worth checking out too, Sixty Symbols and Deep Sky Videos being my favorites. To be clear, I have no affiliation with this, just a fan.

[0] http://bradyharan.com/

[1] https://www.youtube.com/watch?v=HX8bihEe3nA&list=PLt5AfwLFPx...

[2] https://www.youtube.com/watch?v=3P6DWAwwViU&t=54s


Numberphile is one of those series that makes me like math again. That strange, crazy, extreme logic is just great. Then I try to work with it, and I am made glad that I didn't pursue that career.

I especially like the intro to Graham's number and the rough argument that it is unthinkably large: trying to hold all its digits in your head would collapse your head into a black hole many times over.

Math is so boggling. Numbers that are literally super-natural exist and are 'useful' at the same time, yet are impossible to mortals all the same. It's this weird end-around we manage with math/logic. That 'floor falling out from underneath you' feeling, a happy giddiness and smallness.

Sorry, Graham's Number does something fierce to my head, it seems.


TREE(3) is indistinguishable from zero compared to Loader's number[1], which basically takes every bit pattern up to some n and expresses this as a program in the Calculus of Constructions. This system is a bit weaker than being Turing complete, but the programs do always terminate (this makes the number computable compared with, say, the Busy Beaver number which does a similar thing with Turing complete programs).

It also has the geek cred of being represented by an obfuscated C program (the unobfuscated version is also available[2]).

[1] http://googology.wikia.com/wiki/Loader%27s_number

[2] https://github.com/rcls/busy


I can trivially define a function that's vastly larger than that. What's interesting is when these numbers are useful for something.

K(1) = TREE(1), K(2) = TREE(TREE(2)), K(3) = TREE(TREE(...TREE(3)...)) with TREE repeated TREE(3) times.

The sequence is 1, 2, then an incomprehensibly large number.

Now define m(x) in terms of K and m(x-1).
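The shape of this construction can be sketched in code. TREE itself is hopelessly beyond computation, so the `stand_in` function below is a hypothetical placeholder with ordinary exponential growth; only the pattern of iterated application matters here.

```python
def iterate(f, n, x):
    """Apply f to x, n times: f(f(...f(x)...))."""
    for _ in range(n):
        x = f(x)
    return x

def stand_in(n):
    # Hypothetical placeholder for TREE; any fast-growing
    # function would illustrate the same construction.
    return 2 ** n

def K(n):
    # Mirrors "TREE(TREE(...TREE(n)...)) with TREE(n) nestings":
    # apply stand_in to n, stand_in(n) times.
    return iterate(stand_in, stand_in(n), n)

print(K(1))  # stand_in applied twice to 1: 2**(2**1) -> 4
```

Even with this tame stand-in, K(2) is already 2^65536, which hints at why nesting a genuinely fast-growing function explodes so quickly.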


> K(1) = TREE(1), K(2) = TREE(TREE(2)), K(3) = TREE(TREE(...TREE(3)...)) with TREE repeated TREE(3) times.

What you've made here is what's known in googology circles as a 'salad number'. It doesn't make a conceptual leap above what the TREE function can do anyway and is kind of similar to saying TREE(3) + 1.

SCG(13) is much bigger than your K(3) and Loader's number is much bigger than SCG(13). There are (computable) numbers bigger than Loader's number which are mentioned in the googology link in my previous post (greedy clique sequences and finite promise games).


Yea, the point was to pick something trivial. K(3) is larger than TREE applied to any function obvious to a non-mathematician, of any printable number.

But it's the m(x) that's actually large in that example.


> But it's the m(x) that's actually large in that example.

Sure, it's large (mindblowingly so), although I'm not quite clear exactly how you define m(x).

It's hard to get a grasp on just how much vastly larger each of these 'rockstar' numbers is than the previous one when you ascend this conceptual hierarchy. (To a certain extent, that's the point of the article.)

You could make a K(K(...K(n)...)) function where you painted a K on each particle in the visible universe (about 10^80) and it still wouldn't come close to Loader's number. The point is that you can't simply recurse TREE and hope to get a number that's conceptually any bigger than TREE itself.


Yea, at some point you really can't tell if function 1 grows faster than function 2. Anyway, to pick a large m(x):

m(0) = 1; m(x) is the recursion of all finite functions defined using k(m(x-1)) symbols that produce larger numbers than their input defined at stage m(x-1).

So a 1-step function F1(x) = X + X and F2(x) = X ^ X, ... including any function that has been included in a mathematical paper to this point that takes one or more finite numbers as an input.

And because these are all finite functions, call them recursively in whatever order produces the largest output starting with the initial input (x). Now, the size of this depends on what initial notations are allowed, but m(1) is a finite number.

Next level could then include F#(x) where F1(F2(x)) etc.

PS: The above is roughly equivalent to saying infinity +1 and is not interesting IMO. I can also drop the k from the above, but again the point is to use the dumbest approach that works.


Sounds a bit like Yudkowsky's number[1] (your main step is the first one; everything else is salad), which is a computable version of Rayo's number[2].

[1] http://googology.wikia.com/wiki/Yudkowsky%27s_Number

[2] https://en.wikipedia.org/wiki/Rayo%27s_number


Sure, but you need a large seed number at some point for functions like this. ex: Rayo's used 10^100.

So, they are all going to take a large value and use it to boost some other function m(x). I just happened to pick a large f(x).


I think the distinction is that Graham's Number is a "useful" number; i.e. it was used to help answer a problem. As you note, G + 1 is larger than G but G + 1 has (so far) no known significance.

BTW, I totally geeked out reading about Graham's Number. Thanks to the OP!


As the article states, it wasn't actually used in the proof where it's credited. Graham later cited it because it was easier to describe than his actual upper bounds.

I'm a big fan of the busy beaver family of problems and the halting problem is particularly interesting.

Naturally, the definition of the numbers and thought process used to describe them is orders of magnitude more interesting than the numbers themselves.


I've wondered about this. It's interesting that no matter how powerful the function, this sort of thing can always be done.


yeah but +1 is interesting if applied to the argument, not the function. TREE(3+1) instead of TREE(3)+1. In essence, +1 is all you have to work with if you want to stay in a sound logic, the question is just where to apply it. The logical conclusion would be op[TREE+1](3) in array notation.


Ah, I was wondering where the Busy Beaver idea came into this. The computability thing makes sense, ie. we can define a number using Busy Beavers but we can't ever really even reason about how big it is.


I don't think that's a helpful way to think about the Busy Beaver or non-computability. We know S(2,2) is exactly six, even though the function S is non-computable.
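That small case really is fully verifiable: below is a minimal simulator (my own sketch, not from the thread) running the known 2-state, 2-symbol busy beaver champion, which halts after exactly 6 steps with four 1s on the tape.

```python
from collections import defaultdict

# (state, symbol) -> (write, move, next_state); 'H' means halt.
# This is the standard 2-state, 2-symbol busy beaver champion.
champion = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}

def run(machine, max_steps=1000):
    tape = defaultdict(int)  # blank tape of 0s
    pos, state, steps = 0, 'A', 0
    while state != 'H' and steps < max_steps:
        write, move, state = machine[(state, tape[pos])]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

print(run(champion))  # -> (6, 4): S(2) = 6 steps, 4 ones written
```

The uncomputability only bites for larger n, where you must prove that every non-halting n-state machine really never halts.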


unless we're able to simply try all the required Turing machines.


The tricky bit isn't running the ones that halt and noticing when, but reasoning about all the ones that _don't_ halt in order to convince ourselves that they really never halt. The larger and more complex the machine, the harder it may be for a human to reason about it in this way.

For example, suppose it's 1985 and you are examining a machine which seems to be checking all integers X above 204 to see if it can find a^X + b^X = c^X. Does it halt? Unfortunately the only way to answer that question is to wait until 1996 when a proof is published which says Fermat was right, there are no values of X for which this is true. So the machine never halts.

[Edited to try to get HN to format the question properly]
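The machine in that example has a very simple structure. Here's a sketch (generalized to exponents n >= 3 rather than the comment's specific bound) with the search capped so it actually terminates; the real machine would run the loop unbounded and, by Wiles's proof, never halt.

```python
def fermat_search(bound):
    """Search for a counterexample to Fermat's Last Theorem,
    a^n + b^n == c^n with n >= 3, below the given bound.
    Capped so it terminates; the 'real' machine has no bound."""
    for n in range(3, bound):
        for a in range(1, bound):
            for b in range(a, bound):
                for c in range(b, bound):
                    if a**n + b**n == c**n:
                        return (a, b, c, n)
    return None  # no counterexample below the bound

print(fermat_search(15))  # -> None
```

Deciding whether the unbounded version halts is exactly as hard as proving the theorem, which is the point of the example.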


recall this classic:

https://blog.regehr.org/archives/140

C Compilers Disprove Fermat’s Last Theorem


That SSCG function is pretty astonishing:

> SSCG(3) is not only larger than TREE(3), it is much, much larger than TREE(TREE(…TREE(3)…)) where the total nesting depth of the formula is TREE(3) levels of the TREE function.

notwithstanding that TREE(3) is pretty crazy in itself:

> an extremely weak lower bound for TREE(3), is A(A(...A(1)...)), where the number of A's is A(187196), and A() is a version of Ackermann's function. Graham's number, for example, is approximately A^64(4)
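For a sense of how fast even the Ackermann function blows up, here is the classic two-argument version (the quote's A() is a one-argument variant, often taken as A(n) = ackermann(n, n)); only tiny arguments are computable at all.

```python
import sys
sys.setrecursionlimit(100_000)  # the recursion gets deep fast

def ackermann(m, n):
    """Classic two-argument Ackermann function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3), ackermann(3, 3))  # -> 9 61
```

ackermann(4, 2) already has 19,729 digits, and ackermann(4, 3) is beyond any physical computation, yet the quote stacks A(187196) nested applications just to get a weak lower bound for TREE(3).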


Obligatory: https://www.scottaaronson.com/writings/bignumbers.html

(Yes, I know this gets posted every time one of these topics comes up, but there's always someone new who hasn't read it yet.)


I assumed your link was going to be this one about a "graspable" description of the number 52! (factorial).

https://czep.net/weblog/52cards.html

Start a timer that will count down the number of seconds from 52! to 0. Then walk around the world along the equator, at the very leisurely pace of one step every billion years. The equatorial circumference of the Earth is 40,075,017 meters. After you complete your round the world trip, remove one drop of water from the Pacific Ocean. Now do the same thing again: walk around the world at one billion years per step, removing one drop of water from the Pacific Ocean each time you circle the globe. The Pacific Ocean contains 707.6 million cubic kilometers of water. Continue until the ocean is empty. When it is, take one sheet of paper and place it flat on the ground. Now, fill the ocean back up and start the entire process all over again, adding a sheet of paper to the stack each time you’ve emptied the ocean.

Do this until the stack of paper reaches from the Earth to the Sun. Take a glance at the timer, you will see that the three left-most digits haven’t even changed.
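For scale, the timer in that story counts down from a number you can compute exactly; 52! has only 68 digits, which makes the walk-the-earth description above all the more striking.

```python
import math

# 52! = the number of orderings of a standard deck of cards,
# the value the story's countdown timer starts from.
n = math.factorial(52)
print(len(str(n)))  # -> 68 digits, roughly 8.07e67
```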


Old-school Buddhism uses lots of really colorful metaphors like this to describe the length of time people spend in some of the Hell realms. :)


'A Bunch of Rocks' deserves a mention here I think


This text is really very good at allowing one to "feel" such a large number, thanks for sharing.


this reminds me of the Built to Spill song "Randy described Eternity":

>Every thousand years This metal sphere Ten times the size of Jupiter Floats just a few yards past the earth You climb on your roof And take a swipe at it With a single feather Hit it once every thousand years 'Til you've worn it down To the size of a pea Yeah, I'd say that's a long time But it's only half a blink In the place you're gonna be


Just to snark a little - that model assumes a constant earth. After a billion years (the first step) there will be no water left in the pacific anyway. If there's even a pacific ocean by then due to plate tectonics...

Sorry.


Where do people think they're going to put the water when it's not in the ocean, anyway? In space? It would get the paper stack wet so this is clearly nonsense.


The page being quoted points that out:

> Of course, in reality none of this could ever happen. Sorry to break it to you. The truth is, the Pacific Ocean will boil off as the Sun becomes a red giant before you could even take your fifth step in your first trek around the world. Somewhat more of an obstacle, however, is the fact that all the stars in the universe will eventually burn out leaving space a dark, ever-expanding void inhabited by a few scattered elementary particles drifting a tiny fraction of a degree above absolute zero. The exact details are still a bit fuzzy, but according to some reckonings of The Reckoning, all this could happen before you would've had a chance to reduce the vast Pacific by the amount of a few backyard swimming pools.


Well in which case I apologise :)


[leans against light post, flicks cigarette]

"Sure, Graham's Number is okay. ... If you like your big numbers computable."


The ultimate guide to big numbers:

https://mrob.com/pub/math/largenum.html


I'm someone new who hadn't read it yet. I absolutely loved it. Thanks for sharing!



I did not know it, thanks for sharing. This is a very interesting read.

"And in Go, which has a 19-by-19 board and over 10^150 possible positions, even an amateur human can still rout the world's top-ranked computer programs." -- that needs to be updated, though.

Completely aside, I do not know when this article is from, but the update I mention above shows how much we have changed the way we approach some problems (brute force vs. ML).


There was a big number contest at MIT: http://web.mit.edu/arayo/www/bignums.html


(9^9^9^9)/(0.1^9^9^9^9)


David Metzler's series "Ridiculously Huge Numbers"[1] is a good introduction to this. It goes rather well beyond puny little integers like Graham's number.

Loader's number has already been mentioned here, but I don't see any mention of the Busy Beaver function[2]. BB(n) is the maximum number of 1s that can be written by a 2-color, n-state halting Turing machine starting from a blank tape, counted after the machine halts. This is of course uncomputable, and grows very quickly. Rayo's function (Rayo(n) = the smallest natural number greater than all natural numbers that can be uniquely identified by a first-order set theory expression of at most n symbols) grows even faster, but that requires a non-recursively enumerable set theory or the use of non-provable functions.

[1] https://www.youtube.com/watch?v=QXliQvd1vW0&list=PL3A50BB9C3...

[2] http://googology.wikia.com/wiki/Busy_Beaver


> David Metzler's series "Ridiculously Huge Numbers"[1] is a good introduction to this. It goes rather well beyond puny little integers like Graham's number.

Yes, David Metzler's series is great and, I would say, essential viewing if your interest in large numbers is piqued.

While he doesn't go through some of the big numbers talked about here (TREE(3), SCG(13), etc.), he gives an excellent explanation of the fast-growing hierarchy. This is really the closest thing we have to a 'standardized' way of comparing the sizes of very large numbers.

I would also recommend Giroux Studios' 'Extremely Large Numbers'[1], which takes a similar style to Metzler's videos and goes further with the fast-growing hierarchy. (Does something called 'ordinal collapsing functions' sound interesting...?)

[1] https://www.youtube.com/watch?v=vq2BxAJZ4Tc&list=PLUZ0A4xAf7...


Metzler's series got expanded a while back, he does have some sections on TREE (though not on SCG, SSCG, etc). IIRC there's a brief bit about ordinal collapsing functions as well, but not very detailed.


Let me propose YOUTUBE(n) which represents the number of YouTube videos about the mathematics of large numbers.


A while back someone on math stackexchange asked “Graham's Number: why so big?” and I wrote what I think is a pretty good explanation.

If you're interested, it's at https://math.stackexchange.com/q/163423/25554


Superb.


Somewhat more visually satisfying explanation:

https://waitbutwhy.com/2014/11/1000000-grahams-number.html


Man I really wonder what your brain is doing when you try to comprehend those numbers.

> thinking about Graham’s number has actually made me feel a little bit calmer about death


Tim Urban of Wait But Why is a national treasure. One of his best-ever articles.


Thanks for that link. I found this clicked much more easily in my brain.


"But Graham's number is different. I can't tell you how many digits it has. I also can't tell you how many digits its number of digits has, or how many digits the number of digits of its number of digits has. They're all too big."


This is really mild in comparison to the number. You can't even tell how many times you'd have to repeat "the number of digits of" before you could tell the result.


Thinking about Graham's Number reminds me of this quote from CS Lewis's That Hideous Strength

"Saturn, whose name in the heavens is Lurga, stood in the Blue Room. His spirit lay upon the house, or even on the whole earth, with a cold pressure such as might flatten the very orb of Tellus to a wafer. Matched against the lead-like burden of his antiquity, the other gods themselves perhaps felt young and ephemeral. It was a mountain of centuries sloping up from the highest antiquity we can conceive, up and up like a mountain whose summit never comes into sight, not to eternity where the thought can rest, but into more and still more time, into freezing wastes and silence of unnameable numbers. "


This raises the question of what it means to "tell someone how big it is". Does it mean to write it using some particular notation (such as positional notation or exponentiation)? Does it mean to instill a satisfactorily good image in someone's head of how big it is? I mean, mathematically speaking, any definition of a number is as good as any other, and Graham's number can be defined perfectly well.


Don't know about you, but I like this explanation of Graham's number:

https://youtu.be/1N6cOC2P8fQ


Conway chained arrow notation [0] is really mind blowing. The simple notation of “3 -> 3 -> 3 -> 3” results in a number that is larger than Graham’s number.

[0] https://en.wikipedia.org/wiki/Conway_chained_arrow_notation
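The evaluation rules for chained arrows are simple enough to implement; a sketch below, following the rules as given on the linked Wikipedia page. Only trivially small chains are actually computable, since 3 -> 3 -> 3 -> 3 already dwarfs Graham's number.

```python
def chain(*c):
    """Evaluate a Conway chained-arrow expression.
    Rules: a lone p is p; p -> q is p**q; a 1 truncates the
    chain; X -> (p+1) -> (q+1) = X -> (X -> p -> (q+1)) -> q."""
    c = list(c)
    if 1 in c:                       # a 1 truncates the chain
        c = c[:c.index(1)]
    if not c:
        return 1
    if len(c) == 1:
        return c[0]
    if len(c) == 2:
        return c[0] ** c[1]
    X, p, q = c[:-2], c[-2], c[-1]
    return chain(*X, chain(*X, p - 1, q), q - 1)

print(chain(3, 3), chain(2, 2, 2), chain(2, 3, 2))  # -> 27 4 16
```

As a sanity check, 3 -> 2 -> 2 = 3^^2 = 27, and 2 -> 3 -> 2 = 2^^3 = 16, matching the known small values of up-arrow notation.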


Numberphile also did a great video on TREE(3) here (a 'useful' number larger than Graham's): https://www.youtube.com/watch?v=3P6DWAwwViU


There was a puzzle in the MIT mystery hunt a couple years back that involved sorting large numbers. Graham's number was somewhere in the middle.

http://web.mit.edu/puzzle/www/2016/puzzle/identify_sort_inde...


"And in Go, which has a 19-by-19 board and over 10150 possible positions, even an amateur human can still rout the world’s top-ranked computer programs."

When was this written? Evidently before 2017.


> By Evelyn Lamb on April 1, 2014


A bit of a tangent, but it blows my mind how quickly Go went from “computers are so terrible at it that they can’t even beat a decent amateur” to “computers utterly crush the world champion.”


I really don't like this kind of writing; it spends too much time gawking at the number and too little explaining it.

I would rather recommend this video of Graham himself explaining Graham's Number. https://www.youtube.com/watch?v=HX8bihEe3nA


My children loved to invoke googol, like most children it seems. "I want to eat googol cookies!" But I always sternly warned them it was a big number—most of their invocations pretty much entail the observable universe immediately collapsing into a black hole. Then of course they caught wind of googolplex, which is just nuts, and I keep telling them "that's nuts!" Now they're onto Graham's number, and I've run out of tools to convey how insane it is. I'm relegated to saying things like "Remember googolplex? Well that's peanuts compared to Graham's number." I don't think I'm getting through to them.


The linked Numberphile video mentions that a black hole the size of a human head wouldn't have enough entropy to encode Graham's number. So what diameter of black hole would actually be required?


Well, you could plug Graham's number into the Bekenstein–Hawking formula[1]. The formula defines the area, and there's a factor of the Planck length in there, but Graham's number is so overwhelming that the result is pretty much Graham's number in any units you want.

[1] https://en.wikipedia.org/wiki/Black-hole_thermodynamics
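A rough back-of-envelope version of that for the original question (a head-sized black hole): the Bekenstein–Hawking entropy is S/k = A / (4 l_p^2), converted to bits by dividing by ln 2. The 0.1 m radius is my assumed "head-sized" figure.

```python
import math

l_p = 1.616e-35            # Planck length, metres
r = 0.1                    # assumed "head-sized" radius, metres
area = 4 * math.pi * r**2  # horizon area of the black hole

# Bekenstein-Hawking entropy in nats, then converted to bits.
bits = area / (4 * l_p**2) / math.log(2)
print(f"{bits:.1e}")       # -> 1.7e+68
```

About 10^68 bits, while even the number of digits of Graham's number is unspeakably larger than that, so no black hole of any describable size could encode it.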


So vastly large that the difference between a Planck volume and the observable universe is irrelevant. IIUC, it's even so large that the scale factor (how much you'd need to scale the universe up by) is itself literally indescribable as a number which can be encoded on a universe-sized black hole.


Reminds me of Chaitin's constant:

> In the computer science subfield of algorithmic information theory, a Chaitin constant (Chaitin omega number)[1] or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt.

> Each halting probability is a normal and transcendental real number that is not computable, which means that there is no algorithm to compute its digits. Indeed, each halting probability is Martin-Löf random, meaning there is not even any algorithm which can reliably guess its digits.

~ https://en.wikipedia.org/wiki/Chaitin%27s_constant


Are there even a Graham's Number of things we could count in the observable universe?


Definitely no. Overwhelmingly and absurdly no.

Think about it this way.

You can count by just adding. |||| is four. When you're counting small things, that works fine.

Then you have multiplication. Instead of writing |||||||||||||||||||| we just write 20. We developed a new abstraction to write large numbers easier.

Then you have exponentiation: 10000000000 is just 10^10. This abstraction is as far as you need to go to talk about the number of atoms in the universe (roughly 10^80) or the speed of computers (an exaflop is 10^18 operations per second, there are 10^7 to 10^8 seconds in a year, and the universe is 10^10 years old, so if each atom were an exaflop computer you could do 10^(80+18+8+10) = 10^116 operations). Easily written in scientific notation. This is really all the level of abstraction needed for roughly writing large numbers in science and technology. But not for math: certain proofs can require absurdly larger numbers.

So what if we abstracted exponentiation? Instead of writing 10^10^10 (which is 10^10000000000), we write, say, 10^^3. And we can keep doing that. But what if we iterate on the number of abstractions? The number of "^"s is itself abstracted, i.e. 10 ^{m} 10, and THAT is abstracted n times (i.e. we replace m with the previous 10 ^{m} 10, n times). Graham's number is n = 64.

No doubt I probably made a mistake in here... But:

tl;dr: no, we need to develop totally new abstractions to represent Graham's number. Numbers which can be merely represented directly in our universe are easily written in exponential format.
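The abstraction ladder described above is Knuth's up-arrow notation, and the base case is easy to sketch; note that Graham's actual construction uses base 3 (starting from 3 with four arrows), and only trivially small cases are computable at all.

```python
def arrow(a, n, b):
    """Knuth up-arrow: arrow(a, 1, b) = a**b, and each extra
    arrow iterates the previous level b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # a ^{n} 0 = 1 by convention
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3), arrow(2, 2, 3), arrow(2, 3, 2))  # -> 27 16 4
```

Already arrow(3, 2, 3) = 3^(3^3) = 7625597484987, and arrow(3, 3, 3) (i.e. 3^^^3) is a power tower of 3s over 7.6 trillion levels tall, the mere starting point of Graham's 64-step iteration.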


One of the best professors I ever had, when talking about intractability, would always begin his lectures by pointing out that there are only about 10^80 elementary particles in the universe.

He has a small essay online about the subject: https://www.utdallas.edu/~tfarage/MyPapers/TheBestComputerEv...


No. Even if you wrote a single digit in each Planck volume of the observable universe, you'd run out of space.

It is safe to say that there are far fewer than a Graham's Number of things in here with us.


It's not too big to comprehend. It's a number which, if you could keep all its digits in your mind, would cause your head to collapse into a black hole several times larger than the observable universe.


It's a bit of a tangent, but if you find it pleasing (stirring might be a better word) to contemplate large numbers like this, and are curious about why that may be so, you may be interested in Kant's reflections on the aesthetic appreciation of the sublime, see https://en.wikipedia.org/wiki/Sublime_(philosophy)#Immanuel_... for an intro.



Since we're talking about mind-boggling quantities, although this is more related to cardinals it's one of the most fascinating pieces I've ever read.

https://johncarlosbaez.wordpress.com/2016/06/29/large-counta...


I just read all three posts. Very silly, very entertaining. I remember seeing that "clock diagram" of ordinals years ago, and wondering what happens if you take it further. Now I know!


It's just the elegance in the way it outputs the large number. I could easily do an exponential of an exponential of an exponential, repeated exponentially, and that would be hard to imagine, but it would look hacky.


So here's a question that popped into my mind: Is it possible to define two computable functions, f() and g(), such that the truth or falsity of f(x) < g(y) is not computable?


No. This follows from computable functions being closed under composition: just compute f(x) and g(y) in full, then compare.

Provided the comparison function <(,) is computable for the output types of f and g.
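A minimal sketch of that argument with two arbitrary total computable functions (these particular f and g are just illustrative choices):

```python
def f(x):
    return x ** x      # an arbitrary total computable function

def g(y):
    return 2 ** y      # another arbitrary total computable function

def less(x, y):
    # Both calls terminate because f and g are total,
    # so the comparison itself always terminates.
    return f(x) < g(y)

print(less(3, 5), less(3, 4))  # -> True False
```

The question only becomes interesting when f or g is partial or uncomputable, as in the Busy Beaver discussion above.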


Interesting, thanks. So it should be possible to define f() and g() such that neither of those are computable, but the truth of f(x) < g(x) is computable, but not so much for f(x) < g(y). Could one construct a couple of monotonically increasing uncomputable functions with that known relationship?


>So it should be possible to define f() and g() such that neither of those are computable, but the truth of f(x) < g(x) is computable, but not so much for f(x) < g(y).

The difficulty is really just defining an uncomputable function that has a definite value for some domain, but this can be done. Let's call one h.

f(x): -abs(h(x))

g(x): -f(x)

Gives your second inequality.


A good example of mathematical representation as compression.


I guess it's all in the eye of the beholder. For instance, Erdos would say that, for him, the Graham number is just 1.


I just watched a video on how this number is derived and now my head hurts.


This needs a [video] tag.


I love articles like this, they make science so approachable.


Should these be called numbers or rather just processes which (in principle) arrive at numbers?


>Should these be called numbers or rather just processes which arrive at numbers?

They are one and the same. You can use a Graham's number place value system and Graham's number is simply written while a number like 7 might be quite complex to represent.


If a number and the process used to arrive at a number were completely equivalent, then going through the process to arrive at the number would never be worthwhile. Yet people are often interested in actually going through these processes to generate numbers, when it's practical to do so.


That's because they want to represent the number in a particular representation.

They are not "completely" equivalent, they are equivalent up to the representation, converting between representations can have very high computational costs.

But equivalent up to the representation is the usual notion of equality in mathematics, and if you mean something else you have to define it.

(Obviously in computer science, this difference is quite important and we need to differentiate between an expression and its evaluation)


There still seems to be a difference in kind between a number represented in such a way that no further processing is needed, versus represented in a way in which further processing is needed.

To me, such a fundamental difference seems to deserve a distinction in nomenclature.


That's a distinction in goals though, not in concepts. If we want to know how many people are on a bus "the number of people you would count if you bothered" is a perfectly true answer. It's an accurate representation. You perhaps can't do anything more than talk vaguely about abstract bus populations with it, but that is a reasonable thing to do.

On the other hand, maybe "17" is the answer you want, because you need to compare it to some other integer or plug it into a function. Also useful things to do, but not always. And not even more often than the trivial alternatives.

Even though we're significantly biased toward wanting the latter form, it still means the same thing as the former. And the former is a description of a process, while the latter is a number. Or the former is a number, while the latter is a particular encoding of a process. Same difference in the end. We even define numbers by describing the processes we use to generate them.




[flagged]


This very much violates the policy of the site. Please don’t do this.



