Hey!
This is fantastic and actually ties together some highly disparate parts of math. Basically: reorient & reformulate all of math/epistemology around discrete sampling of the continuum. Invert our notions of aleph/beth/Betti numbers as some sort of triadic Grothendieck topoi that encode our human brain's sensory instruments, which nucleate discrete samples of the continuum of reality (ontology).
Then every modal logic becomes some mapping of 2^(N) to some set of statements. The only thing that matters is how predictive they are under some objective function/metric/measure, but you can always induce an "ultrametric" around notions of cognitive complexity classes, i.e. your brain is finite and can compute finitely many thoughts/second. Thus, for all cognition models that compute some meta-logic around some objective F, we can motivate that less complex models are "better". That's where the ultra-measure comes in, tying disparate logic systems together. So I can take your Peano axioms and induce a ternary logic (True, False, Maybe) or an indefinite-definite logic (True or something else entirely). I can even induce Bayesian logics by taking power sets of T/F. So a 2x2 Bayesian inference logic: (True Positive, True Negative, False Positive, False Negative)
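(For the curious: "induce a ternary logic" doesn't pin down a unique construction; Kleene's strong three-valued logic is one standard way to add a "Maybe" to two-valued logic. A minimal sketch, with my own function names, nothing here is from the thread itself:)

```python
# Sketch: Kleene's strong three-valued logic (True, False, Maybe),
# one standard instance of "inducing a ternary logic" from the binary one.
T, F, M = "T", "F", "M"

def k_not(a):
    # Negation leaves Maybe as Maybe.
    return {T: F, F: T, M: M}[a]

def k_and(a, b):
    # False dominates; both True gives True; otherwise uncertainty remains.
    if a == F or b == F:
        return F
    if a == T and b == T:
        return T
    return M

def k_or(a, b):
    # Defined from AND and NOT via De Morgan.
    return k_not(k_and(k_not(a), k_not(b)))

print(k_and(T, M), k_or(T, M), k_or(F, M))  # M T M
```

The 2x2 "Bayesian inference logic" above would be a different construction again (truth values indexed by claim vs. ground truth), not derivable from this one.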
Fun stuff!
Edit: The technical tl;dr that I left out is a unification of all math imho: algebraic topology + differential geometry + tropical geometry + algebraic analysis. D-modules and microlocal calculus from Kashiwara, plus the Yoneda lemma, encode all of epistemology as relational: either between objects, or the interaction between objects defined as collisionless Planck hypervolumes.
Basically, it encodes the particle-wave duality as discrete-continuum, and all of epistemology is Grothendieck topoi + derived categories + functorial spaces between isometries of those dual spaces, whether algebras/coalgebras (discrete modality) or homologies/cohomologies (continuous actions).
Edit 2: The thing that ties everything together is Noether's symmetries/conserved quantities, which (my own wild-ass hunch) are best encoded as "modular forms", arithmetic's final mystery. The continuous symmetry, I think, makes it easy to think about diffeomorphisms between different topoi by extracting homeomorphisms from gauge-invariant symmetries (in the discrete case it's a lattice, but in the continuous case we'd have to formalize some notion of liquid or fluid bases? I think Kashiwara's crystal bases have some utility there, but this is so beyond my understanding)
> Invert our notions of aleph/beth/Betti numbers as some sort of triadic Grothendieck topoi that encode our human brain's sensory instruments, which nucleate discrete samples of the continuum of reality (ontology)
There’s probably ten+ years of math education encoded in this single sentence?
My apologies to ikrima for being critical, but I think anyone who thinks "aleph/beth/Betti numbers" is a coherent set of things to put together is just very confused.
Aleph and beth numbers are related things, in the field of set theory. (Two sequences[1] of infinite cardinal numbers. The alephs are all the infinite cardinals, if the axiom of choice holds. The beth numbers are the specific ones you get by repeatedly taking powersets. They're only all the cardinals if the "generalized continuum hypothesis" holds, a much stronger condition.)
[1] It's not clear that this is quite the right word, but no matter.
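To spell out "repeatedly taking powersets": the beth numbers are defined by transfinite recursion,

```latex
\beth_0 = \aleph_0, \qquad
\beth_{\alpha+1} = 2^{\beth_\alpha}, \qquad
\beth_\lambda = \sup_{\alpha < \lambda} \beth_\alpha \ \text{ for limit ordinals } \lambda.
```

The generalized continuum hypothesis is exactly the statement that \aleph_\alpha = \beth_\alpha for every ordinal \alpha.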
Betti numbers are something totally different. (If you have a topological space, you can compute a sequence[2] of numbers called Betti numbers that describe some of its features. They are the ranks of its homology groups. The usual handwavy thing to say is that they describe how many d-dimensional "holes" the space has, for each d.)
[2] This time in exactly the usual sense.
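To make "ranks of homology groups" concrete, here's a small sketch (my own illustration, not from the thread) computing Betti numbers over the rationals from boundary matrices, for a hollow triangle, which is homotopy equivalent to a circle:

```python
import numpy as np

# Sketch: Betti numbers over Q of a hollow triangle (3 vertices, 3 edges,
# no filled face), i.e. a circle up to homotopy.
#   b_k = dim ker(boundary_k) - rank(boundary_{k+1})

# boundary_1 sends an edge [u, v] to v - u.
# Rows = vertices 0, 1, 2; columns = edges 01, 02, 12.
d1 = np.array([
    [-1, -1,  0],
    [ 1,  0, -1],
    [ 0,  1,  1],
])

rank_d1 = np.linalg.matrix_rank(d1)
n_vertices, n_edges = d1.shape

# boundary_0 is the zero map, so its kernel is the whole vertex space.
b0 = n_vertices - rank_d1
# There are no 2-simplices, so rank(boundary_2) = 0.
b1 = (n_edges - rank_d1) - 0

print(b0, b1)  # 1 1: one connected component, one 1-dimensional "hole"
```

(This ignores torsion, which ranks can't see; over Q that's fine for the "counting holes" intuition.)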
It's not quite true that there is no connection between these things, because there are connections between any two things in pure mathematics and that's one of its delights. But so far as I can see the only connections are very indirect. (Aleph and beth numbers have to do with set theory. Betti numbers have to do with topology. There is a thing called topos theory that connects set theory and topology in interesting ways. But so far as I know this relationship doesn't produce any particular connection between infinite cardinals and the homology groups of topological spaces.)
I think ikrima's sentence is mathematically-flavoured word salad. (I think "Betti" comes after "beth" mostly because they sound similar.) You could probably take ten years to get familiar with all the individual ideas it alludes to, but having done so you wouldn't understand that sentence because there isn't anything there to understand.
BUT I am not myself a topos theorist, nor an expert in "our human brain's sensory instruments". Maybe there's more "there" there than it looks like to me and I'm just too stupid to understand. My guess would be not, but you doubtless already worked that out.
[EDITED to add:] On reflection, "word salad" is a bit much. E.g., it's reasonable to suggest that our senses are doing something like discrete sampling of a continuous world. (Or something like bandwidth-limited sampling, which is kinda only a Fourier transform away from being discrete.) But I continue to think the details look more like buzzword-slinging than like actual insight, and that "aleph/beth/Betti" thing really rings alarm bells.
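(The "only a Fourier transform away" point is the Nyquist-Shannon sampling theorem: a bandlimited signal is fully determined by discrete samples. A rough numerical sketch, my own illustration with arbitrary parameters:)

```python
import numpy as np

# Sketch: Shannon reconstruction. A signal bandlimited below the Nyquist
# frequency fs/2 is determined by its discrete samples, and is recovered by
# sinc interpolation: x(t) = sum_n x[n] * sinc(fs*t - n).

fs = 10.0                      # sampling rate (Hz); Nyquist frequency is 5 Hz
ts = np.arange(0, 2, 1 / fs)   # sample times over a finite window
f0 = 3.0                       # test tone, safely below Nyquist
samples = np.sin(2 * np.pi * f0 * ts)

def reconstruct(t):
    # Whittaker-Shannon interpolation from the discrete samples.
    # np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc(fs*(t - ts)) is the kernel.
    return np.sum(samples * np.sinc(fs * (t - ts)))

# Evaluate at an off-grid time near the middle of the window, where the
# error from truncating the infinite sum to a finite window is smallest.
t_query = 1.017
err = abs(reconstruct(t_query) - np.sin(2 * np.pi * f0 * t_query))
print(err)  # small; exact reconstruction needs the full infinite sum
```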
Also, you're onto the actual quantum mechanics paper I'm working on. QM/QFT is modern-day epicycles: arbitrarily complex because it aliases the natural, deeper representation, which is Fourier/spectral analysis.
Reformulating our entire ontology around relational mechanics is the answer imho. So Carlo Rovelli's RQM is right, but I think it doesn't go far enough. Construct a Grothendieck topos with a spacetime cohomology across different scales of both space and time, with some sort of indefinite conservation, and you get collisionless Planck hypervolumes that map naturally to particle-wave duality interpretations of QM.
lol, it's a sketch of a proof covering a large swath of unexplored math. The other poster wasn't wrong when he said I smashed 10y+ of graduate math into one sentence.
Aleph numbers: these are cardinal sizes of infinity; depending on your choice of axioms, ZFC or not, you have the continuum hypothesis of aleph0 = naturals, aleph1 = 2^N = continuum
Beth numbers are transfinite ordinals => they generalize infinitesimals like the 1st, 2nd, 3rd, so you can think of them as a dual or co-algebra (I'm hand-waving here; it's been twenty years since real analysis).
Betti numbers are for persistent cohomology; they track holes similar to genus
I mean, there's a lot to cover between tropical geometry, differential geometry, and algebraic analysis. So sometimes alarm bells are false alarms and your random internet commenter knows what he's talking about but is admittedly too sloppy; it's 5 pm on a Saturday and I wrote that in the morning while making breakfast eggs, not for submission to the Annals of Mathematics!
Also, if you're really that uptight, most of this is actually to teach algebraic topology to my autistic nonverbal nephew because I'm gonna gamify it as a magic spell system
So it'll be open source and that begs the question, if you use it to learn something, did that mean I just zero-proof zero-knowledge something out of you that I didn't even need to know by making a self referential statement across both space & time?
The comment you're replying to already explained what aleph, beth and Betti numbers are. (But a few nitpicks: 1. Beth numbers are not ordinals, they're cardinals. They're indexed by ordinals, just as the alephs are, but if that's what you care about why not use the ordinals themselves? 2. I'm not seeing how you get from "Beth numbers are indexed by ordinals" to "they generalize infinitesimals" to "you can think of them as a dual". Not saying there isn't something there, but I think you could stand to unpack it a bit if so. 3. Betti numbers are not only for persistent (co)homology; they were around long before anyone had thought of persistent (co)homology.)
It's certainly possible (as I explicitly said before) that my bad-math-alarms have hit a false positive here. You haven't convinced me yet, for what it's worth. (You need not, of course, care whether you convince me or not. It's not as if my opinion is likely to have any effect on you beyond whatever you might feel about it.)
I think we're vehemently in semantic agreement, but hn comment threads are too bandwidth-limited to discuss tropical geometry and speculative mathematics that require decades of abstract algebra, geometry, and Galois theory :)
It would be plenty enough if I needed to get started. But you don't seem to be paying sufficient attention to what I wrote to notice that I already know what the beth numbers are and that unlike you I haven't written anything flatly false about them in this discussion.
I'm aware I'm being a bit dickish about this, which I regret, but I'm not sure how else to respond to what seem like repeated deliberate attempts to frame this as "ikrima, the expert, kindly condescends to provide some elementary mathematics education to gjm11, the novice" which doesn't appear to me to be an accurate characterization of the situation.
:P I had a stroke; typing is literally difficult. I'm trying to say don't read too much into it; I can't really have a conversation on a comment thread b/c of brain injury. I think the emojis get stripped out, so maybe my tone seems more abrasive than whimsical.
But also, I mean, you are just flat-out wrong on some very big parts. E.g.: I think in 2024 or 2023 there was a big breakthrough in the geometrization of Langlands. IIRC, there was a second big breakthrough on the discrete-continuum connection relating to primes in some manner, but I can't remember specifics off the top of my head.
I think you're confusing what beth numbers are used for vs. what I'm proposing they be repurposed for. You're right, no one is using them the way I referenced, but that's kind of what math research is...?
(Conversational difficulties acknowledged. And, if trying to conduct this conversation here is just too inconvenient or annoying, I promise I will not take dropping out as admission of defeat or anything like that.)
I'm not sure how progress on geometric Langlands means I'm "flat out wrong on some very big parts". I certainly didn't say "there has been no progress on the geometric Langlands conjecture recently" or anything like that, nor did I say anything like "there is never any connection between discrete and continuous things in mathematics". (Not least because that's obviously very false.) So I don't understand what it is I've said that you think progress on the geometric Langlands conjecture is a counterargument to.
I understand that you're describing things that (you hope) could be done, rather than things that are already standard practice. But I don't think you've given any actual indication of how beth numbers in particular are going to be relevant. They're a very specific thing, and nothing you've said about them seems to make any contact with what they are in a way that would distinguish them from (say) the alephs or even the ordinals.
What I'm not seeing is any sign that there's more here than (1) a very general idea that could be stated perfectly well without any very advanced mathematics (e.g., "wouldn't it be nice to have a unified theory of how continuous and discrete things relate to one another, and apply it to human perception and cognition?") and (2) a bunch of buzzwords from various fields of advanced mathematics that have some superficial connection to #1, and (3) a handwavy assertion that somehow these things are all deeply connected ... without any actual deeper connection than "these things all sound like they might relate to one another".
What makes all that advanced mathematics worth the effort is its precision and rigour. Maybe all the precision and rigour is actually there and you just haven't chosen to show it to us. Maybe you have a deep enough intuition for these things that we should just trust that the details can be filled in precisely and rigorously. If it were, say, Terry Tao saying these things then I'd be inclined to trust him (though only provisionally; very smart mathematicians do sometimes have grand visions that never work out). Maybe, maybe, maybe. But as yet I haven't seen the evidence. And, whereas very smart mathematicians do sometimes have grand visions that never work out, the grand visions of amateurs unfortunately almost never turn out to have much substance. (Not literally never. Ramanujan was the real deal, for instance. But for every Ramanujan there are thousands of people who are, well, Not Ramanujan.)
Yeah, sorry! You're 100% right. I really wish I could have this conversation in person! I can barely have it normally as it stretches at the edge of my math abilities and didn't think it through before I autistically responded :P
I mean you wouldn't be wrong to assume so but how can you expect anyone to saliently condense the entirety of a 10 year long proof of Grothendieck topos to 3 or 4 sentences my guy!
[EDITED to add:] This is worth noting because today's LLMs really don't seem to understand mathematics very well. (This may be becoming less so with e.g. o3-pro and o4, but I'm pretty sure that document was not written by either of those.) They're not bad at pushing mathematical words around in plausible-looking ways; they can often solve fairly routine mathematical problems, even ones that aren't easy for humans who unlike the LLMs haven't read every bit of mathematical writing produced to date by the human race; but they don't really understand what they're doing, and the nature of the mistakes they make shows that.
(For the avoidance of doubt, I am not making the tired argument that of course LLMs don't understand anything, they're just pattern-matching, something something stochastic parrots something. So far as I can tell it's perfectly possible that better LLMs, or other near-future AI systems that have a lot in common with LLMs or are mostly built out of LLMs, will be as good at mathematics as the best humans are. I'm just pretty sure that they're still some way off.)
(In particular, if you want to say "humans also don't really understand mathematics, they just push words and symbols around, and some have got very good at it", I don't think that's 100% wrong. Cf. the quotation attributed to John von Neumann: "Young man, in mathematics you don't understand things, you just get used to them." I don't think it's 100% right either, and some of the ways in which some humans are good at mathematics -- e.g., geometric intuition, visualization -- match up with things LLMs aren't currently good at. Anyway, I know of no reason why AI systems couldn't be much better at mathematics than the likes of Terry Tao, never mind e.g. me, but they aren't close enough to that yet for "hey, ChatGPT, please evaluate my speculation that we should be unifying continuous and discrete mathematics via topoi in a way that links aleph, beth and Betti numbers and shows how our brains nucleate discrete samples of continuum reality" to produce output that has value for anything other than inspiration.)
Yup, it's 100% generated by an LLM. I thought that was intentionally clear? (I'm recovering from a TBI so I'm still adjusting to figuring out how to relearn typing; I use the LLMs as my voice mediated interface to typing out thoughts).
I'm not sure there's an argument I'm hearing here other than you seem to have triggered some internal heuristic of "this was written by an LLM" x "It contains math words I don't understand" => "this is bullshit"
Which you wouldn't be wrong about, but I am making a specific constructivist modal logic here using infinity-groupoids from category theory. Infinite-dimensional categories are a thing, and that's what these transfinite numbers represent.
You have hyperreal constructions of the reals as well, following nonstandard analysis. You can also use Weil cohomology, which IIRC gets us most of calculus without the axiom of choice, but someone check me on that.
So... again, not sure what your specific critique is?
No specific critique here other than "it was written by an LLM and this seems worth pointing out given that LLMs are bad at actually understanding difficult mathematics".
(In a different comment I make some actual criticisms of what you wrote. I see you replied to my comment there, and that's a more appropriate place to discuss actual ideas. I don't see much point in criticizing LLM output in a field LLMs are bad at.)
Anyway: (1) no, it wasn't clear. I wouldn't generally take "I nerd-sniped myself. Here's a more fleshed-out sketch of ..." to mean "Here's something written for me by an LLM". I'd take it to imply that the person had done the fleshing-out themself. And (2) no, the problem wasn't that you used words I don't understand. It's certainly possible that your ideas are excellent and I just don't understand them, but I'm a mathematician myself and none of the words scare me.
"no, it wasn't clear. I wouldn't generally take "I nerd-sniped myself. Here's a more fleshed-out sketch of ..." to mean "Here's something written for me by an LLM". I'd take it to imply that the person had done the fleshing-out themself."
aaaaaaaah, I think you finally helped me notice something subtle in the way I use LLMs compared to other people. It sounds obvious now that I think about it, but I never considered that people use LLMs like Google, whereas I use it more like a real-time thought transcriber (e.g. Dragon NaturallySpeaking but not shite :P). Since it's trained on a RAG based off of my own polished thoughts, I've set it up as a meta-circular evaluator to do linguistic filtration (basically Fourier kernels on CLIP embedding space that map to various measures of "conceptual clarity").
So the LLM-ness of it is, to me, a clear flag that this was hastily dictated.
You know what, since you put in all that work, here's my version using p-adic geometry to generalize the concept of time as a local relativistic "motive" (from category theory) notion of ordering (i.e. analogous to Grothendieck's generalization of functions as being point samples along curves of a basis of distributions to generalize notions of derivatives):
I love this; insanely insightful crystallization for me. I see you got a degree in IR and you used the word axiom, so I'd refract it back to you: reformulate stupidity as ignorance, and assume a continuous conserved action exists between < stupidity-ignorance-malice | danger > so that your observable is a danger measure-metric.
Basically, the reason ignorance is so dangerous is that it aliases between harmless stupidity and extreme evil, but it fails to trigger our collective species' evil-detection pattern coding, which results in an unbelievable cascading evil failure mode.
In a sense, religion is a meta-epistemic solution to the binding problem of our collective subconscious.
I think if you apply the epistemic quantum mechanics to different logic modalities, you get some interesting insights!
> Basically, the reason ignorance is so dangerous is that it aliases between harmless stupidity and extreme evil, but it fails to trigger our collective species' evil-detection pattern coding, which results in an unbelievable cascading evil failure mode.
Great summary. The problem with ignorance is that you are more likely to forgive the perpetrator, whereas a malicious actor would have been cut off once detected. You then stick with the ignorant for way longer and suffer losses beyond what a malicious actor would likely have caused. This is my impression from relationships at work.
I think Bonhoeffer was referring to an acquired or affected stupidity; a position adopted defensively to fit into a social or political situation.
If the truth becomes dangerous or unpopular, a decent defense is adopting stupidity. I think that is subtly different from ignorance, which implies never knowing, as opposed to a rejection of truth.
Like much cognitive dissonance, it can be easiest to live with if you just change your beliefs rather than trying to rail against it.
The danger is, once truth is denied, reality becomes disconnected and atrocity much more abstract.
Maybe there is a better name yet for the phenomenon.
I'd say watch out for the "One brain" fallacy/cognitive bias in all of these responses.
There are lots of factors (age, discipline, interest in what you're doing, etc) but I think a lot of people overlook that one big component of this is brain chemistry. Some athletes are endurance athletes, some athletes are sprinters. It seems obvious that you wouldn't expect Usain Bolt to run the fastest marathon just because he's a runner.
Our brains are just as genetically varied as our other physical characteristics. For me, I know that my peak productivity is from 11 pm - 4 am, and it goes down exponentially if I have to do non-fun tasks or if I'm in a stressful crunch as opposed to a positive crunch. I also know that after 3-4 months of 10-12 hour days, I need to go back down to 5-6 hours until I get excited again. And for some strange reason, if I don't feel something is required, I can leisurely work on something for 12-16 hours a day indefinitely (researching, reading conference papers, learning). If I have to debug nasty bugs like multithreaded data races, I can't be productive for those lengths of time, so I try to mix fun work with that type of dreaded work.
Know what works for you and build your work schedule around that. That's how we work at our startup.
Just tried google stalking a way to reach you but came short. I'm in vfx r&d and wanted to ask you more about your performance optimization techniques. You mind sending your email to [email protected]?
So the investors are just 20-30 high-net-worth individuals? From an investing point of view, I'd be curious to see the returns and mechanics of VC funds. The highest returns I've ever come across on a hedge fund were 30% ROE over 10 years... until the quant fund blew up in the housing crisis.
So the investors are just 20-30 high net worth individuals?
More often 20-30 high net worth institutions -- university endowment funds, pension funds, and suchlike -- but I imagine many VCs have a few individual investors too.
Awesome; that's what I was looking for. I was also hoping to get advice on algorithmic or architectural knowledge for distributed web apps, as opposed to specific details on languages.
Ex: how Facebook scales to 500+ million users, or how Flickr can serve so many photos that fast. Obviously, for my Flickr app clone I don't need that level of performance, but I'd at least like some pointers on how to go about it the right way.
Well, Facebook uses PHP and HipHop, which is an implementation that converts the PHP to C++ and compiles it as machine code as opposed to interpreted code. That is how they scale on PHP. As for Flickr, they use PHP as well; you may want to look at this document, which outlines the Flickr architecture and what they did.
PHP is without a doubt the most popular web 1.0 server-side language, and if you are looking to do this as a job, then it is well worth it. Personally, I would rather jab my eyes out than work with another line of PHP. It was originally designed by a group of people who had neither extensive language experience nor web experience. It is kludged together, and you feel it every step of the way. Over time, people who do know what they are doing have fixed some of it, but there is still some crap legacy in it. I swore off PHP and will not take a contract that mandates it.
Now for the good part: it is fast to develop in, it has a lot of support, and there is enough stuff built in that you can find almost anything you need. WordPress and Drupal are two of the most popular CMS systems, and both are written in PHP. You can build a site in 4 hrs with either one and a template from one of the template sites. It's like the crack of the web development world: cheap, dirty, powerful, and it gives you what you want, but no one admits to using it.