You’re thinking of this with the benefit of Dedekind in your schooling - whether or not your calculus class told you about him.
Completeness - a gapless number line - was neither obvious nor easy to prove; the construction is usually elided even in undergraduate calculus unless you take actual “real analysis” courses.
The issue is this: for any given number you choose, I claim you cannot tell me a number “touching” it - I can always find a number strictly between your candidate and the first number. Ergo, the onus is on you to show that the number line is in fact continuous. With the naive construction, what it looks like is something with an infinite number of holes.
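The “no number touches another” claim is just the midpoint argument; a minimal sketch in modern notation:

```latex
\text{For any } a < b:\qquad a \;<\; \frac{a+b}{2} \;<\; b,
\qquad\text{since}\qquad
\frac{a+b}{2} - a \;=\; b - \frac{a+b}{2} \;=\; \frac{b-a}{2} \;>\; 0 .
```

So whatever candidate you name, the midpoint between it and my number is a witness that they don’t “touch”.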
I think you are getting away from my point, which pertains to what the article said, which is that mathematicians thought there were "gaps". What mathematician? Can I see the original quote?
The linguistic sleight-of-hand is what I challenge. What is this "gap" in which there are no numbers?
- A reader would naturally assume the word refers to a range. But if that is the meaning, then mathematicians never believed there were gaps between numbers.
- Or could "gap" refer to a single number, like sqrt(2)? If so, it obviously is not a gap without a number.
- Or does it refer to gaps between rational numbers? In other words, not all numbers are rational? Mathematicians did in fact believe this, from antiquity even ... but that remains true!
Regarding this naive construction you are referring to: did it precede set theory? What definition of "gap" would explain the article's treatment of it?
I don’t know the answers to all of your questions - but I believe you’d benefit from some mathematical history books around the formalization of real analysis; I’m not the best person to give you that history.
A couple of comments, though. First, all mathematics is linguistics, and arguably it is all sleight of hand. That said, the word “gaps” - which you’ve rightly pointed out is vague - is a journalist’s word standing in for a variety of concepts at different times.
The existence of the irrationals was itself a secret in ancient Greece - so they have been known for thousands of years, but the structure of the irrationals was not well understood until quite recently.
To talk precisely about these gaps, if you’re not a mathematical historian, you have to borrow terminology from the tools that were used to describe and formalize the irrationals. If former concepts about the number line sound hand-wavy to you, it is because they WERE hand-wavy. And this hand-waviness is about infinity as well; the two are intimately connected. In modern terms, the measure of the rationals within any interval of the (real) number line is zero - that is the meaning of the “gaps”. Between any two rationals there is a great unending sea where, if you were to choose a point completely at random, the odds of that point being rational are zero.
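The “measure zero” claim has a surprisingly short modern proof: enumerate the rationals and cover them with intervals whose total length is as small as you like.

```latex
\text{Enumerate } \mathbb{Q} = \{q_1, q_2, q_3, \dots\} \text{ and fix } \varepsilon > 0.
\text{ Cover each } q_n \text{ by } I_n = \Bigl(q_n - \tfrac{\varepsilon}{2^{n+1}},\; q_n + \tfrac{\varepsilon}{2^{n+1}}\Bigr).
\text{ Then } \mathbb{Q} \subseteq \bigcup_{n} I_n
\quad\text{and}\quad
\sum_{n=1}^{\infty} |I_n| = \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}} = \varepsilon,
\qquad\text{so } \mu(\mathbb{Q}) = 0 .
```

Since ε was arbitrary, the rationals fit inside covers of total length as close to zero as you like - which is exactly why a uniformly random point is rational with probability zero.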
EDIT: for a light but engaging read about topics like this, David Foster Wallace’s Everything and More is excellent.
It's getting better (in a C++ kinda way), certainly, but...
It's ultimately still driven by matching "random" identifiers (classes, ids, etc.) across semantic boundaries. Usually the problem is that the result is mostly visual, which makes it disproportionately hard to actually write tests for CSS and make sure you don't break random stuff when you change a tiny thing in your CSS.
For old farts like me: It's like the Aspect-Oriented Programming days of Java, but you can't really do meaningful tests. (Not that you could do 'negative' testing well in AOP, but even positive testing is annoyingly difficult with CSS.)
EDIT: Just to add: it's a genuinely difficult problem, but I consider the "separate presentation from content" idea a bit of a windmill to tilt at. There will always be interplay, and an artificial separation will lead to ... awkward compromises and friction.
I think the Referer header kinda-sorta serves as a mitigation against 3rd parties (maliciously) hot-linking to, say, images on your domain, effectively forcing you to bear the upload-bandwidth cost for those images.
(And similar, it's just that images sprang to mind.)
Imagine a comparison function that needs to call sort() as part of its implementation. You could argue that's probably a bad idea, but it would be a problem for this case.
(You could solve that with a manually maintained stack for the context in a thread local, but you'd have to do that case-by-case)
Anyway, the larger point is that a re-entrant general solution is desirable. The sort example might be a bit misguided, because who calls sort-inside-sort[0]? Nobody, realistically - but these types of issues are prevalent in the "how to do closures" area... and in C every API does it slightly differently, if the authors are even aware of the issues.
[0] Because there's no community that likes nitpicking like the C (or C++) community. I considered preempting that objection :). C++ has solved this, so there's that.
You can ensure that you do not call it recursively by checking that the thread local is NULL before invocation.
> a re-entrant general solution is desirable.
I know what you mean, but I just don't know why you want to emulate that in C. There is a real problem of people writing APIs that don't let you pass in data with your function pointer - the thread local method can solve 99% of those without changes to the original API.
But if you really want to do all kinds of first class functions with data, do you want to use C?
I can't speak for the parent poster, but for global function declarations, yes, absolutely.
It's infuriating when a type error can "jump" across global functions just because you weren't explicit about what types those functions should have had, even if those types are very abstract. So early adopters learned to sprinkle in type annotations at certain points until they discovered that the top level was a good place. In OCaml this pain is somewhat lessened when you use module interface files, but without them... it's painful.
> I think it's pretty widely agreed that requiring type annotations at the function level is a good thing anyway. Apparently it's considered good practice in Haskell even though Haskell doesn't require it.
In Haskell-land: At the global scope, yes, that's considered good practice, especially if the function is exported from a module. When you just want a local helper function for some tail-recursive fun it's a bit of extra ceremony for little benefit.
(... but for Rust specifically local functions are not really a big thing, so... In Scala it can be a bit annoying, but the ol' subtyping inference undecidability thing rears its ugly head there, so there's that...)
Languages with local type inference can sometimes omit type annotations from lambdas, if that lambda is being returned or passed as an argument to another function. In those situations we know what the expected type of the argument should be and can omit it.
Yeah, that's true and that's a good convenience even if it's not full inference. In the case of Scala, the parameter types may often be required, but at least the return type can be omitted, so there's that.