“Math is the queen of the sciences, and number theory is the queen of mathematics.” -Gauss1
“Big Number Theory is just like number theory, only bigger.” -me
Back in college, I skimmed the landmark paper, “More is Different”, by P.W. Anderson.2 Later on, I found a mutual on Twitter who had the title of the paper as his Twitter handle.3 This made me think maybe I should actually read the paper.
But even before reading the paper, I already had a theory I wanted to explore. What if there were certain properties that numbers had, but only once they grew to a certain size? In particular, the intuition was just based on emergence, the idea that if you put enough sand in one place, you now have a pile. Maybe once numbers reached a certain size, there was new structure contained within them.
MATH
So, imagine a property, P, that all numbers, n, have once they are greater than a certain threshold, N. If we say that the function f takes a number n and outputs 1 if the number has property P, we’re really saying the following:4 f(n) = 1 for all n > N.
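As a toy sketch of this setup (the names f and P, the threshold value, and the placeholder property are all mine, not anything established):

```python
N = 100  # hypothetical threshold

def P(n):
    """A placeholder property; the hope is to find nontrivial ones."""
    return n > N

def f(n):
    """Indicator function: 1 if n has property P, else 0."""
    return 1 if P(n) else 0

# Every number past the threshold satisfies P, so f is identically 1 there.
print(all(f(n) == 1 for n in range(N + 1, N + 1000)))  # True
```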
Unfortunately, I haven’t found many applications of this idea yet.
There’s the trivial property, like saying all numbers greater than 100 are greater than 100.
There’s something somewhat related in the concept of uncountability, I think. The idea that a set of numbers can be uncountably infinite necessarily presupposes that the size is infinite already. But you’re already sort of breaking the concept of size at that point, and I don’t really believe in infinity anyway5, so that’s not what I want.
There are some analogous ideas in statistics, like the law of large numbers: as your number of samples increases, you converge on the true average, or the true representation of likelihood.
The difference between this and my idea is that, technically, the law doesn’t apply to every measurement, only to the average of the results of independent measurements. And technically, there’s nothing impossible about flipping a coin a million times in a row and getting heads every time. This is probably the closest real analogy to the actual paper; I have an intuition that a lot of macroscopic physics is due to statistical effects. I don’t actually know physics, though, so this is shaky knowledge.6
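The law of large numbers is easy to watch in action; a quick simulation sketch (function name and numbers are mine, nothing here is from the paper):

```python
import random

def heads_fraction(flips):
    """Flip a fair coin `flips` times; return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return heads / flips

random.seed(0)  # reproducibility
for n in (10, 1_000, 100_000):
    print(n, heads_fraction(n))
# The fractions drift toward 0.5 as n grows, but no single outcome,
# not even a freak run of a million heads, is ever ruled out.
```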
The closest branch of math I have found has been Ramsey Theory. This is the branch of math that is obsessed with the alternate future where Frank Ramsey doesn’t die in his 20s.
Just kidding, that’s Galois theory; it’s weird, but Ramsey theory is actually about Galois.7
Okay, so Ramsey theory is a branch of combinatorics. Combinatorics is just about counting stuff. Copying from Wikipedia, 'problems in Ramsey theory typically ask a question of the form: "how big must some structure be to guarantee that a particular property holds?"' Would you look at that, that’s almost exactly the type of question I’m asking! Naturally, I’ve learned very little Ramsey theory8; instead, I’ve just meditated on the words “number,” “structure,” and “size,” all bouncing around in my head. Weirdly, this didn’t allow me to reach any key breakthroughs. I suppose I just need to do it more, that will make it different.
To further illustrate the connection between Ramsey theory and what I’m trying to study, let’s look at an example. The most prototypical Ramsey theory problem is as follows: Imagine a collection of dots. Draw a line from each dot to every other dot, and color each line either red or blue. How many dots do you need before you are guaranteed that you’ve drawn a triangle whose three sides are all the same color? The answer is 6, so if you have 6 or more dots, you are guaranteed, no matter the coloring you choose, to have at least one all-red triangle or one all-blue triangle.
In the language of our original formulation, what this means is our property P is as follows: Does a complete graph with n points, where each line between points is one of two colors, always contain a triangle whose three lines are all the same color?9
If our function f returns 1 when P is fulfilled, we have the following: f(n) = 1 for all n ≥ 6.
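This claim (in the standard notation, R(3,3) = 6) is small enough to check by brute force; a sketch, with function names of my own invention:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j), i < j, to 0 (red) or 1 (blue)."""
    for a, b, c in combinations(range(n), 3):
        if coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]:
            return True
    return False

def every_coloring_has_mono_triangle(n):
    """Try all 2-colorings of the edges of the complete graph on n dots."""
    edges = list(combinations(range(n), 2))
    for colors in product([0, 1], repeat=len(edges)):
        if not has_mono_triangle(n, dict(zip(edges, colors))):
            return False  # found a coloring with no one-color triangle
    return True

print(every_coloring_has_mono_triangle(5))  # False: 5 dots can avoid it
print(every_coloring_has_mono_triangle(6))  # True: 6 dots cannot
```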
Clearly Ramsey Theory has some examples of nontrivial properties that apply to all numbers past a certain size. More exploration of the field is warranted.
PAPER “SUMMARY”
If more is different, then different is more.10 If I were to try to do something different, since perhaps that will lead to more, it might behoove me to actually read the paper. It’s extremely embarrassing that I didn’t remember any details of the paper before writing this post, given that the paper is only 4 pages. The main thrust is that, get this: ‘more’ “is” ““different””. You’re welcome, congrats on your newfound enlightenment. Anderson writes that even though we’re all reductionists nowadays, reductionism doesn’t imply constructivism. That is, “the ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe.”
In particular, Anderson notes that this “constructionist hypothesis” appears to break down when confronted with scale and complexity. You’ll note that in this blog post, we’re really only focused on scale. If we restrict our focus to numbers, and not anything more complicated, it’s tough to say we’re really dealing with much complexity. It’s possible that this restriction means our efforts won’t produce much; it’s also possible that math doesn’t even have any concepts complex enough to rival molecules.11
With those remarks out of the way, what else does Anderson say? Unfortunately for us, Anderson is a physicist, which might as well mean he’s some sort of sociologist or other 4-letter word from our perspective as a mathematician. Standing on the fuzzy ground of “material reality”, he unfortunately makes a lot of assumptions that we will have to ignore or adapt into our own methods.
He gives an analysis on the ammonia molecule which is so riveting that I refuse to explain it.12
From this case study on ammonia, he claims that at least 3 inferences can be drawn:
Symmetry is of great importance in physics.
The internal structure of a piece of matter need not be symmetrical even if the total state of it is.
The state of a really big system does not need to have the symmetry of the laws which govern it; it usually has less symmetry.
Taking his axioms for granted, what does this mean for our battle? How can we take the products of someone else’s hard-fought thoughts and torture them to get something useful for us? Divorce something from its context, take it from its environment, and it might die without the support. But let’s see if we can salvage the dying breaths.
Unfortunately, these inferences don’t seem terribly useful for our problem; in fact, in some ways, they’re actively harmful. If I’m stretching, all this talk about symmetry might mean an exploration into group theory is warranted.
#2, the internal structure of a piece of matter need not be symmetrical even if the total state of it is. This seems actively harmful. The analogy I can pretend I see if I squint would be the idea of highly composite numbers. Take 60, the smallest number divisible by every number from 1 through 6. That’s a lot of symmetry. Now look at 61. In some sense, you can say that the number 61 “contains” 60.13 If you have 61 of something, you have 60 of them. But also, 61 is highly not composite; in fact, it’s prime. So, maybe that’s kinda what this is talking about, not really.
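The contrast between 60 and 61 is easy to check (a throwaway sketch):

```python
def divisors(n):
    """All positive divisors of n, by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
print(divisors(61))  # [1, 61], i.e. prime
```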
The third idea, that the state of a really big system does not need to have the symmetry of the laws which govern it, is so depressing that I don’t know where to go. The bigger something is, the less symmetry it has? That’s not what I want, I don’t think, at least. But Anderson does talk about larger objects having more structure, so I suppose less symmetry means fewer restrictions, which allows more to grow? IDK, I need to stew on this a little more.
NEXT STEPS
I think one of the next steps for me would be to actually learn some Ramsey theory. In particular, after skimming the Wikipedia page, the idea of partition regularity makes sense, as it seems to fit the idea of bigger numbers having more “structure”.
Finish reading the paper. I only covered up to a certain point, as I was reaching the limits of my understanding.
Collect a lot of objects (pebbles or something) in one place. I mean a lot of pebbles, really try to build up intuition on what it means to hold that many things at the same time. Like that old Mitch Hedberg joke about rice.
Supposedly the limit for humans is that we can hold about 7 concepts in our head at the same time, but we can remember phone numbers, or at least we used to be able to, so there’s surely some wiggle room. Besides, this is technically one thing to behold; it’s just a really big thing.
This post was twice as long as usual, and it took four times as long to write.1415 I’m gonna try to write a whole other new post today, from start to finish, to make sure I keep the streak going.
Perhaps a false attribution, see https://hsm.stackexchange.com/questions/12992/a-quote-attributed-to-gauss “Don’t trust anything you read on the internet” -Abe Lincoln
Unfortunately, this blog post will pale in comparison to the paper. In a very real sense, the best thing to do as a reader of this blog would be to click on all the links, and read those papers and posts instead. Instead of half-cribbed rambling, you would receive fully original rants.
Can you believe that people used to do maths without shorthand? That would be like reading an essay to learn about math. Lol, joke’s on you. Also makes me think about what other technologies are out there. If something as simple as representing a number with n can do so much for readability, I really gotta review that tools for thought blogpost.
Yes, this means I believe there’s a biggest number. No, I can’t tell you what that number is. You might think that’s ridiculous. Yet when I say, “no one knows the day I’m going to die, so I’m gonna live forever”, you think I’m the ridiculous one. Heads you win, tails I lose, whatever. Numbers are by definition finite, like days, or cotton candy.
The idea of shaky knowledge is interesting; really, the idea isn’t that the knowledge itself is shaky, but rather that the foundation, or presuppositions, are. But I suppose the foundation isn’t really shaky or loose; it’s more about whether it actually exists or not. There’s a furthering of this idea, I believe by Nozick, about his life philosophy being more resilient: instead of a long chain of logic building a tall mountain, he maintains a large set of intuitions, all separate from each other, to support a point. This way, if one of the intuitions fails, the point is still supported, in contrast to the long chain of logic, which immediately breaks in the event of one misstep. There’s also an analogy from Jhourney, on their idea of collectedness vs. concentration. Alright, end this thought now; that should be a separate blog post. Classic top-down vs. bottom-up. (Resilience, antifragile, etc.)
Mathematicians really suck at naming things.
This is unflattering, and possibly misleading enough to be untrue. I skimmed the Wikipedia page, and I have done projects in graph theory and on summarizing the Erdős–Szemerédi theorem. That being said, I have never taken a class in Ramsey theory, nor read a textbook, so at best my knowledge is fairly surface level.
Apologies for all the math words, at some point you have to give in to brevity, but I will admit to picking an odd hill to die on, given the long-windedness of the rest of this essay.
And all rectangles are squares.
Though I doubt it, ask Tegmark about that.
The refusal to explain it is also because I am not up to the task.
I believe that in math, this actually depends on how you characterize the natural numbers in set theory. If you write 0 = {} = Ø, 1 = {0} = {Ø}, 2 = {0,1} = {Ø, {Ø}}, 3 = {0,1,2} = {Ø,{Ø}, {Ø, {Ø}}}, and so on, you do have each number containing all numbers less than it.
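This construction (the von Neumann one) is mechanical enough to write down; a sketch using frozensets (names mine):

```python
def von_neumann(n):
    """Build n as the set of all smaller numbers: n = {0, 1, ..., n-1}."""
    s = frozenset()  # 0 = the empty set
    for _ in range(n):
        s = s | {s}  # successor step: k + 1 = k together with {k}
    return s

two, three = von_neumann(2), von_neumann(3)
print(two in three)  # True: 2 is an element of 3
print(two < three)   # True: 2 is also a proper subset of 3
```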
And it’s half as good. JK, but it does get harder to maintain cohesiveness the longer the post. Bad sign for my theory lol.
Also too many footnotes.