13 December, 2012

Review: M 33 in Andromeda

M 33 in Andromeda by A.E. van Vogt
My rating: 3 of 5 stars

This is a great collection of soft science fiction stories from a classic author. Well worth the read, though none of the stories in this collection jumped out at me as being particularly worthy of praise. Good, consistently high-quality stories like these are rare to find, so even though none of these are on my favorites list, I'm still glad that I took the time to read this book.

View all my reviews

06 December, 2012

Really Big Numbers

EDIT: Portions of this article when first published were factually inaccurate. These errors have since been corrected. A 2018 follow-up post is also available.

Occasionally, when I'm talking to younger people and they find out that I'm mathematically literate, I'll get a question of the form: "What's the biggest number you can name?"

It's a somewhat difficult question to answer, because it really depends on what counts as naming and what counts as a number. After all, am I allowed to use a function to identify a number? Does the number have to be finite? Must it be useful in some sense? Should I include transfinite numbers? Or do they just want the highest number that has a common English name?

Since there are different answers depending on what the questioner thinks is acceptable, I'll try to list a few of the standard answers I give to these younger questioners here, starting from the smallest and working my way up. If I've missed any natural answers to the question, please comment with your suggestion, so I can add it to my repertoire.

Common English Names


The first few counting numbers have names that we commonly use in English, e.g., ten, hundred, thousand, million, billion, trillion, quadrillion, quintillion, sextillion, septillion, octillion, nonillion, &c. Some of these get obscure, but they're nevertheless understandable. One of the largest such named numbers in fact has a famous search engine named after it: a googol is equivalent to 10^100. Google's headquarters is also named after an even larger such number; the Googleplex in Mountain View, California, is named after the googolplex, which is 10^(googol). In between these two values is the centillion at 10^303, which is about as large as you can get while using the "-illion" suffix. (You can artificially construct larger terms, like the uncentillion, but this is just a matter of inserting additional prefixes.)
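
If you want to play with these, Python's arbitrary-precision integers make it easy; here's a quick illustration of my own (the variable names are just descriptive):

```python
# Rough illustration of the named values mentioned above (my own sketch).
# Python integers have arbitrary precision, so a googol or a centillion can be
# written out in full; a googolplex cannot, since it has 10^100 + 1 digits.

googol = 10 ** 100
centillion = 10 ** 303

print(len(str(googol)))        # 101 digits
print(len(str(centillion)))    # 304 digits
# googolplex = 10 ** googol    # don't try this: the result has a googol + 1 digits
```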

Also of note here is Archimedes' system for estimating the number of grains of sand that could fill the universe at 10^63, using a unit called the myriad, which is equal to 10,000. Archimedes did this by calling a myriad of myriads the first unit, then taking a myriad myriad of myriad myriad units to create the second unit, and so on eight times, arguing that this would be more than sufficient to count the grains of sand filling the universe.
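
As a quick check of the arithmetic described above (my own sketch, not Archimedes' actual construction):

```python
# Archimedes' scheme, as described above, repeated eight times.
myriad = 10_000                 # 10^4
first_unit = myriad * myriad    # a myriad of myriads = 10^8
eighth_unit = first_unit ** 8   # repeating the construction eight times

print(first_unit)               # 100000000
print(eighth_unit == 10 ** 64)  # True: comfortably past the 10^63 grain estimate
```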

Mathematical Notation


Of course, there are many numbers that are much larger than these, but they generally do not have common English names. Rather, we write them out using scientific notation or some other mathematical notation. The googolplex above is only 10^(10^100); if you want something bigger, you need only write out another exponent. But even if you repeatedly take exponents of exponents all day long, you won't get anywhere near where you can get if you instead use a more efficient notation system.

Mathematical operations build upon themselves recursively, as any schoolchild learns when they begin to see the relationship between addition and multiplication. Exponentiation is the next logical step after multiplication, but there are further steps one can take. The next method up is called tetration, and it involves repeated exponentiation.

The addition of A+N is when you take the number A and apply the successor operation N times. The multiplication of A*N (or A×N) is when you take A and add it to itself N times. The exponentiation of A^N (or Aᴺ) is when you take A and multiply it by itself N times. The tetration of ᴺA (the N written as a superscript to the left of the A) is when you take A and exponentiate it by itself N times. I'm sure you can see the pattern here.
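
Here is the ladder so far written out as a small sketch in Python (my own illustration; only safe for tiny inputs, since tetration explodes almost immediately):

```python
def add(a, n):
    return a + n

def multiply(a, n):             # a added to itself n times
    result = 0
    for _ in range(n):
        result = add(result, a)
    return result

def power(a, n):                # a multiplied by itself n times
    result = 1
    for _ in range(n):
        result = multiply(result, a)
    return result

def tetrate(a, n):              # a exponentiated by itself n times: a^a^...^a
    result = 1
    for _ in range(n):
        result = power(a, result)
    return result

print(power(2, 10))             # 1024
print(tetrate(2, 4))            # 2^2^2^2 = 65536
print(len(str(tetrate(2, 5))))  # 2^65536 already has 19,729 digits
```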

ᴺA is A^A^…^A^A, where there are N As in the tower. This results in very large numbers: ¹⁰⁰10 is much bigger than 10¹⁰⁰, for example. But some notations go far beyond mere tetration.

One such way of notating large numbers is Knuth's up-arrow notation. Rather than superscripts, up arrows are used to indicate which type of operation is being used. Exponentiation is written out as A^N (or A↑N) instead of Aᴺ. Tetration is written as A^^N (or A⇈N or A↑²N) instead of ᴺA. And the next step in the series follows the same rule: pentation is written as A^^^N (or A↑↑↑N or A↑³N), which is just taking A and tetrating it N times. Similarly, hexation is A^^^^N (or A⇈⇈N or A↑⁴N), which is taking A and pentating it N times. And so on.
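
The same idea can be written as one general function; here's a rough sketch of my own (not anyone's official implementation), where the number of arrows is a parameter:

```python
def arrows(a, k, n):
    """Compute a ↑^k n in Knuth's up-arrow notation (k >= 1)."""
    if k == 1:                  # a single arrow is plain exponentiation
        return a ** n
    if n == 0:                  # a ↑^k 0 = 1 by convention
        return 1
    # a ↑^k n = a ↑^(k-1) (a ↑^k (n-1)): repeat the previous operation
    return arrows(a, k - 1, arrows(a, k, n - 1))

print(arrows(3, 1, 3))   # 3^3 = 27
print(arrows(3, 2, 3))   # 3↑↑3 = 3^3^3 = 7,625,597,484,987
print(arrows(2, 3, 3))   # 2↑↑↑3 = 2↑↑4 = 65,536
# arrows(3, 3, 3), i.e. 3↑↑↑3, is already far too large to ever evaluate.
```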

It is perhaps a little difficult to fully understand the scale of these numbers. Even the innocuous looking 3^^^3 is much, much, much bigger than a googolplex:
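
To see roughly why, here's a worked expansion of my own:

```latex
3\uparrow\uparrow\uparrow 3
  = 3\uparrow\uparrow(3\uparrow\uparrow 3)
  = 3\uparrow\uparrow 3^{3^{3}}
  = 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987
  = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}
```

A googolplex, by contrast, is only 10^(10^100): a power tower just three levels high.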


Add another arrow or two in there and the number becomes so large that trying to just think about it boggles the mind. Usually, when I want to name a very large finite number, I will go with 3^^^^3. It's far bigger than almost anything that we deal with in reality, and so effectively works as a stand-in for "extraordinarily large finite number".

However, it's not even close to the biggest that some mathematicians work with.

Graham's number refers to the number of dimensions a hypercube must have in order to satisfy some esoteric conditions in Ramsey theory. The number is so large that the number of up arrows you'd need to display it is itself amazingly huge. Here's one upper bound of Graham's number that isn't even the lowest known upper bound:
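
(The equation originally displayed here isn't reproduced; presumably it was the standard layered construction, which runs something like this.)

```latex
\text{Graham's number} \;=\;
\left.
\begin{array}{c}
3\uparrow\uparrow\cdots\cdots\cdots\cdots\uparrow 3\\
3\uparrow\uparrow\cdots\cdots\cdots\uparrow 3\\
\vdots\\
3\uparrow\uparrow\cdots\uparrow 3\\
3\uparrow\uparrow\uparrow\uparrow 3
\end{array}
\right\}\ 64\ \text{layers}
```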



In the above equation, notice that the number of up arrows in the first line is indicated by the number in the line below it. My "extraordinarily large finite number" that I commonly use, 3^^^^3, is merely the number of up arrows used in the sequence directly above it. As you can see, Graham's number is huge.

Fast Growing Functions


And yet, Graham's number is not nearly as big as some. If functions are allowable in our answer, then much bigger numbers can be shown in a small amount of space. Unfortunately, many people seem to think that saying f(x), where the function is defined as f(x) = x+1, is not the same thing as naming a number. If I say f(3), we all know that what I'm pointing at is 4, but perhaps we might say that I only pointed at it and yet did not name it. However, there is philosophical debate on this issue; after all, what more is there to naming than pointing out which thing we mean? In a very real sense, f(3) can be said to be yet another name for the number 4 here. If you agree with this logic, then we can easily construct much bigger numbers than even Graham's number by merely using functions that grow much faster than even nested up arrow notation can handle.


Take the TREE(x) function. It comes out of a theorem about finite trees in combinatorics and grows much faster than most people can appreciate. TREE(3) is at least A(A(...A(1)...)), where the number of nested As is A(187196), and A(x) is a version of Ackermann's function: A(x) = 2↑^(x−1)x. Compare that to Graham's number, which is just A^64(4), a much smaller number than this established lower bound of TREE(3), A^(A(187196))(1). If you want a compact way of pointing out an absurdly large finite number, consider using TREE(100).

But if you think the TREE() function grows quickly, then consider the busy beaver function. Σ(n) refers to the maximum number of steps performed by any halting Turing machine with n states, and it has been mathematically proven to grow faster than any computable function, though there might be non-computable functions that grow even faster. It starts out slow, with Σ(12) being at least 3^^^^3, the bottom line of Graham's number. But it grows fast: Σ(2k) > 3↑^(k−2)3 > A(k−2, k−2) for k ≥ 2, where A() is Ackermann's function: A(m,n) = 2↑^(m−2)(n+3) − 3. The only way I know of to reliably get even bigger numbers out of it is to put bigger numbers inside the function, and god help you if you start iterating the function inside itself.
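
For the curious, here's a small sketch of the two-argument Ackermann function in its usual recursive form (my own code, which matches the closed formula above for small inputs):

```python
def ackermann(m, n):
    """The standard two-argument Ackermann-Peter recursion."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# For m >= 3 this agrees with the closed form A(m, n) = 2↑^(m-2)(n+3) - 3,
# e.g. A(3, n) = 2^(n+3) - 3:
print(ackermann(3, 0))   # 5  = 2^3 - 3
print(ackermann(3, 3))   # 61 = 2^6 - 3
# Anything much bigger than this blows past Python's recursion limit immediately.
```

(The single-argument A(x) = 2↑^(x−1)x used in the TREE(3) bound above is a different but closely related convention.)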

Edit: This section was originally inverted, and incorrectly portrayed the TREE() function as faster growing than the Busy Beaver function.

Infinity and Beyond


Of course, all the above numbers are finite, so there is at least one number that beats them all: ∞. Yet the infinity that most people think of when they imagine "the infinite" is merely א0, the first and lowest transfinite cardinal number. Sets of this size are countably infinite, like the set of all positive integers. Yet this comparatively small infinity is dwarfed by א1, the smallest uncountable cardinal, which might or might not correspond to the cardinality of the real numbers. (Whether 2^(א0)=א1 is the continuum hypothesis, and it is far from settled.)

But cardinality is not the only kind of infinity; there are also ordinal infinities. The lowest transfinite ordinal number is ω, which corresponds to the order type of the natural numbers. We can then consider the אa where a=ω. This gives אω, which is the smallest uncountable cardinal that ZFC can prove is not the cardinality of ℝ. (The cardinality of the real numbers might be א1, which is smaller, or אω+1, which is bigger, or many other cardinals besides.) Don't worry if this is starting to make you dizzy; it does the same to me, too.
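
The reason אω in particular can be ruled out, as I understand it, is König's theorem, which forbids the continuum from having countable cofinality:

```latex
\operatorname{cf}\!\left(2^{\aleph_0}\right) > \aleph_0
\quad\text{(König's theorem)},
\qquad
\operatorname{cf}(\aleph_\omega) = \aleph_0,
\qquad\text{so}\qquad
2^{\aleph_0} \neq \aleph_\omega.
```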

However, this is just the low end of the scale. At the high end, things really get deep. Cantor looked at the system of all ordinal numbers, calling it Ω. Wondering if he could get even higher, he proposed the sequence:

{0, 1, 2, 3, …, ω, ω+1, …, γ, …}

where each γ in the sequence goes on to describe the order type of all preceding elements. But this turns out to be inconsistent: if it were consistent, then there would be some ordinal number corresponding to its order type, and that number would have to be higher than every number in the above sequence. But since it is an ordinal number, it must be in Ω to begin with, since Ω has all the ordinal numbers in it. A number cannot be greater than itself, and so this proposed system cannot consistently exist. This means that Ω is definitely the absolute infinity of its class.

Does this mean that Ω is the winner here? Can we get any bigger? The answer, once again, is that it depends. Woodin's Ultimate L, LΩ, is the ultimate enlargement of Gödel's constructible universe L, which is itself the proper class of all constructible pure sets. Here, L refers to the standard inner model of the theory ZFC+V=L, where ZFC is Zermelo-Fraenkel set theory and V is the von Neumann universe, the proper class of all well-founded pure sets.

That's a mouthful, but does it mean that LΩ is "higher" than Ω? I don't know that it even makes much sense to ask the question, because they're talking about different things. Yet they're both "numbers", so maybe that means they can be compared anyway? To be honest, I'm not entirely sure. When it comes to infinities any bigger than א0, my brain implodes. The previous paragraph, in fact, was written with only a vague understanding of what I've read elsewhere; it's more regurgitation than true understanding on my part.

Conclusion


After all this, I suppose the biggest entity I feel comfortable talking about is Ω, especially when it comes to trying to answer unexpected questions from younger people. Concepts like Hausdorff's absolute continuum cΩ or Woodin's Ultimate LΩ describe situations that may or may not correspond to אω or Ω or whatever. For all I know, everything from אω on up is equivalently large. At some point I may bother trying to understand this stuff more in depth, but until then I think I'll just stick with the large finite numbers, and with Ω if they want to know how big ∞ can truly get.

EDIT: A few mathematicians expressed concern that some of the mathematical sentences I originally presented in this essay were factually incorrect. These errors have since been corrected. Thank you to Professor Edgar Bering and grad students Bo Waggoner and Charlie Cunningham for finding and correcting my errors.