Take the approximate number of subatomic particles in the universe, call it Ω. Define the largest number as Ω² and the smallest number as -Ω², and define the number of decimal numbers between each integer number as Ω², evenly spaced. That should be more than enough numbers. Redefine Ω with each new discovery in physics.
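A toy sketch of that scheme (with a deliberately tiny Ω, since the real value would be astronomically large — the clamp and step choices here are illustrative assumptions, not anything from physics):

```python
from fractions import Fraction

# Toy "Omega arithmetic": every value lives on an evenly spaced grid with
# Omega**2 steps per unit interval, clamped to [-Omega**2, +Omega**2].
# OMEGA here is deliberately tiny; the comment's real Omega would be ~10^80.
OMEGA = 10
STEP = Fraction(1, OMEGA**2)          # grid spacing: 1/100
LARGEST = Fraction(OMEGA**2)          # largest number: 100

def quantize(x):
    """Snap x onto the grid, clamping at the largest/smallest number."""
    snapped = round(Fraction(x) / STEP) * STEP
    return max(-LARGEST, min(LARGEST, snapped))

print(quantize(Fraction(1, 3)))   # nearest grid point to 1/3 is 33/100
print(quantize(10**6))            # clamps to the "largest number", 100
```

Note how 1/3 — which has no finite decimal expansion — simply snaps to the nearest grid point, and anything past Ω² saturates at the boundary.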
I don’t understand, and I hope it’s just bad writing.
Certainly you can build a branch of mathematics without an axiom of infinity, and that’s fine, it’s math over finite sets.
However, the axiom of infinity is independent: it doesn’t contradict anything in the standard formalizations, so it doesn’t make sense to say “infinity is wrong”.
He may think the axiom of infinity isn’t satisfied by our real physical world, but that’s not a math question! There’s nothing logically inconsistent about infinite sets or their axiomatizations.
And no discussion of Zeno? Pish.
The idea that nothing demonstrates infinity is clearly incorrect.
Take the screen you're reading this on. One pixel is composed of a bunch of different atoms, and once you get down to one of them, that atom subdivides into a bunch of subatomic particles, some of which even have mass. Let's take one of those for argument's sake. Split that, and you get some quarks.
Now let's imagine that's the smallest you can go. We can still talk about half of a down quark, or half of that, etc. Say, uh, infinitely so. There you go, everything is infinite. That wasn't so hard, was it?
> To Zeilberger, believing in infinity is like believing in God. It’s an alluring idea that flatters our intuitions and helps us make sense of all sorts of phenomena. But the problem is that we cannot truly observe infinity, and so we cannot truly say what it is.
I'm hoping this is just bad writing from Quanta rather than something "ultrafinitists" truly believe.
I really don't think it's that complicated. Even pre-schoolers, competing to see who can say the highest number, quickly learn the concept of infinity. Or elementary school students trying to write 1/3 as a decimal.
Of course you need to be careful mapping infinity onto the physical world. But as a mathematical concept, there is absolutely nothing wrong with it.
> Mathematicians can construct a form of calculus without infinity, for instance, cutting infinitesimal limits out of the picture entirely.
This seems like a useful concept that also doesn't require denying the very obvious concept of infinity.
I’m pretty certain a finite number of pre-schoolers can only recite a finite number of numbers.
Yes, they could go on indefinitely, but will they ever?
They pretty quickly realize that there is no winning, because you can always just say a bigger number than the last kid - there is no biggest number. Usually something like "a hundred million million million million million and two", "a hundred million million million million million and three", etc.
And then someone, whose friend or older brother taught them the concept, blurts out "infinity". And after a quick explanation, everyone more or less gets it.
INFINITY PLUS 1
Uncountable infinity
And then the next kid says "infinity plus two", which is a perfectly acceptable progression, and the cycle starts again.
When I was about ten, a math teacher once asked me whether the number 0.9999... (infinitely repeating) was different than 1. I said, with my child's intuition, that of course it was. He then challenged me to write down a number that was in between them, because if they were not the same number then there would be many (in fact, infinitely many) numbers between them. I couldn't, of course: the best I could do was to write 0.9999...5, which falls into the same category error as "infinity plus one / infinity plus two".
Now, decades later, I get it better. The number 0.99999... is 9/10 + 9/100 + 9/1000 + 9/10000 + ..., and the partial sums of that series approach 1 the same way the partial sums of 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... do. The number itself is the limit of those partial sums, so 0.9999... really is exactly 1, which also neatly answers Zeno's Paradox. (Though beware of the limitations of that kind of analysis: 1/n grows without bound as n approaches 0 from the positive direction, but 1/0 is not equal to infinity. If you look at the sequence 1/-0.1, 1/-0.01, 1/-0.001, etc., where n approaches 0 from the negative direction, 1/n heads toward negative infinity instead. A function whose limits differ as you approach the same point from different directions has no limit there, so you can't substitute a value for it like that.)
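The standard geometric-series computation behind both of those sums, written out:

```latex
0.\overline{9} = \sum_{n=1}^{\infty} \frac{9}{10^n}
             = \frac{9}{10}\cdot\frac{1}{1-\frac{1}{10}}
             = \frac{9}{10}\cdot\frac{10}{9} = 1,
\qquad\text{and likewise}\qquad
\sum_{n=1}^{\infty} \frac{1}{2^n}
             = \frac{1}{2}\cdot\frac{1}{1-\frac{1}{2}} = 1.
```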
> Yes, they could go on indefinitely
Only if they live forever, which they won't. They can only count so fast, and there are only so many of them. Even if every atom in the observable universe were counting at, idk, 1 GHz, the total would still be a finite number. The universe is not (as far as we know for certain) infinitely old. Time may extend infinitely into the future, or it may not. We don't know. So far as we know for sure, everything is in fact finite.
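A rough back-of-the-envelope check of that bound (the figures — ~10^80 atoms, ~14 billion years — are the usual order-of-magnitude estimates, assumed here, not measured):

```python
# Generous upper bound on "numbers ever counted": every atom in the
# observable universe counting at 1 GHz for the entire age of the universe.
ATOMS = 10**80                        # rough order-of-magnitude estimate
RATE_HZ = 10**9                       # 1 GHz each
AGE_S = 14 * 10**9 * 31_557_600       # ~14 billion Julian years, in seconds

total_counts = ATOMS * RATE_HZ * AGE_S
print(len(str(total_counts)))         # ~107 digits: huge, but finite
```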
finite moments. cherish them.
Normally amps only go up to ten… but this one goes to eleven. …it’s one louder ain’t it!?!
In school I developed a strong hunch that continuity and infinity are "convenient delusions" that allow us to process the otherwise horrific complexity of the world. Experiencing time, sound, or visual motion as continuous rather than as discrete signal inputs is so much simpler. Similarly, the mathematical tricks and shortcuts we can use on well-behaved continuous functions are both "unreasonably effective" and... probably not grounded in actual reality[1]? But damn are they convenient.
[1] EDIT: the reasoning is simple, if naive: the largest quantities we can measure are not, in fact, infinitely large, and the smallest ones we can measure are not, in fact, infinitesimally small. So until you show me an infinitesimal or an infinity, you're just making them up!
I've always felt that to treat infinity as a number is to commit a category error (aka a type conflict): confusing the process with the outcome of the process. Infinity has proven to be very useful, but usefulness doesn't make it valid in every context.
It's not a new idea, and it's a challenging one to investigate. Without real numbers (which are infinitely long) most of calculus stops working, and so does everything that depends on it.
Perhaps we can recover some of it by treating the continuous values as approximations of underlying discrete ones, and then somehow proving that the resulting errors stay bounded, at least for some interesting problems.
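One tiny example of that flavor of argument (the function and step sizes below are arbitrary choices, just for illustration): replace the limit-based derivative with a finite difference, and the error stays bounded in proportion to the step size.

```python
import math

# A discrete stand-in for the derivative: a finite difference quotient with
# step h, no limit taken.  For smooth functions the error is bounded by
# roughly h * max|f''| / 2, so it shrinks in proportion to the step size.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)                       # the "infinitely fine" answer
for h in (1e-2, 1e-4, 1e-6):
    err = abs(forward_diff(math.sin, 1.0, h) - exact)
    print(f"h={h:g}  error={err:.2e}")      # error tracks h
```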
The article doesn’t really tell us what is gained by rejecting infinity.
And in general, why not also reject zero, negative numbers, irrational numbers, complex numbers, uncomputable numbers, etc.?
Seems like an article about quacks that can’t even agree on what the bounds and rules of their quackery are.
All indications seem to be that things are only lost, not gained. But that doesn't mean the finitist picture doesn't hew closer to how things actually are, and if that's how reality actually is, then developing a rigorous understanding of it can only be a good thing, right?
Rejecting infinity is a purely philosophical stance that doesn’t teach us anything about reality.
There is a big difference between “infinity doesn’t exist” and “infinity doesn’t exist physically”.
I should also add that the resolution of Zeno’s paradox in the form of calculus, where an infinite set of steps can occur in a finite time (or an infinite set of distances can sum to a finite total distance), is conceptually very simple and useful. Rejecting it as unphysical, or saying it must imply that time or space comes in discrete chunks, is not contributing to an understanding of reality unless the rejection also comes with a set of (in principle) testable predictions.
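That resolution can be sketched with exact arithmetic — the 30-step cutoff below is an arbitrary choice, but it shows the gap to the finite total shrinking geometrically:

```python
from fractions import Fraction

# Zeno's runner: step n covers half of the remaining distance, so the
# steps take 1/2 + 1/4 + 1/8 + ... of the course.  The partial sums never
# exceed 1, and the remaining gap after n steps is exactly 1/2**n.
covered = Fraction(0)
for n in range(1, 31):
    covered += Fraction(1, 2**n)

print(covered)            # 1 - 1/2**30
print(1 - covered)        # exactly 1/2**30: the gap halves every step
```

Infinitely many steps, finite total: the infinite sum is exactly 1.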
> There is a big difference between “infinity doesn’t exist” and “infinity doesn’t exist physically”.
Is there? I think one could make a decent case for "nothing exists which doesn't exist physically[1]".
[1] https://plato.stanford.edu/entries/physicalism/
EDIT: you could even probably claim "nothing exists which isn't physically measurable", which may or may not be a stronger claim depending on your point of view.
EDIT AGAIN: rate limited by this dogshit website :D but I'll respond to this comment here:
> Which is exactly why I mentioned rejection of zero, negative numbers, etc. You can reject them, but doing so just throws away useful tools without gaining anything in return.
Yeah! I fully agree. I can see no obvious benefit to rejecting these powerful tools. However, important discoveries often happen in non-obvious directions, and exploring unexplored territory is generally worthwhile. So the fact that it doesn't seem immediately useful doesn't mean it's not worth trying!
Which is exactly why I mentioned rejection of zero, negative numbers, etc.
You can reject them, but doing so just throws away useful tools without gaining anything in return.
The first thing that came to mind reading the article is that you need only 60ish digits of pi to calculate the circumference of the universe to within a Planck length, or something like that. You can have all the digits you want, but at some point you are beyond what is possible in reality, and the extra digits do nothing for what you are trying to achieve.
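The arithmetic behind the "60ish digits" figure (the diameter and Planck-length values are the usual rough estimates, assumed here):

```python
import math

# How many significant digits of pi does a Planck-resolution circumference
# of the observable universe need?  Both constants are rough estimates.
DIAMETER_M = 8.8e26       # observable-universe diameter, metres
PLANCK_M = 1.616e-35      # Planck length, metres

# The circumference is pi * DIAMETER_M; to pin it down to one Planck length,
# pi's relative error must be below PLANCK_M / (pi * DIAMETER_M).
digits = math.ceil(math.log10(math.pi * DIAMETER_M / PLANCK_M))
print(digits)             # about 63 digits
```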