It’s a cute story but it’s not constant-time in any sense.
Creating ‘N’ threads and adding them all to a sorted wake list will be between O(N log N) and O(N^2), depending on the OS and/or language runtime.
There is either a sorted list, a heap, or an N^2 algorithm somewhere behind the scenes.
Similarly, the sleep sort itself is at least linear time. You have to wake N threads to output N sorted items.
Worse, in wall clock time it also scales with the values. You could compress the range by first finding the minimum and maximum values, but that’s also … linear time.
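For reference, the algorithm under discussion can be sketched in a few lines of Python (a toy sketch: the `scale` multiplier is an assumed knob, and the ordering is only reliable when the gaps between values dwarf scheduler jitter):

```python
import threading
import time

def sleep_sort(nums, scale=0.05):
    """Toy sleep sort: one thread per value; each sleeps in proportion
    to its value, then appends it. Wall-clock time grows with max(nums),
    and the OS's timer/wake list hides the real sorting work."""
    result = []
    lock = threading.Lock()

    def worker(n):
        time.sleep(n * scale)
        with lock:
            result.append(n)

    threads = [threading.Thread(target=worker, args=(n,)) for n in nums]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result
```

Even in this toy form both complaints are visible: N threads must be created, woken, and joined (linear work at minimum), and the total wall-clock time is roughly max(nums) * scale.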
To risk killing the joke: when I said "constant time", I was alluding to the vocabulary of time complexity analysis without actually meaning to invoke it. It's a double entendre that plays on two conflicting senses of the word "time": yes, it is impossible to make a sorting algorithm constant time in complexity-analysis terms. The joke is that the actual intent of the statement is wall-clock time. That's the time that is more relevant in job interviews (and realistically, when people pull the "write an integer sort function" thing out of their ass for me to defecate code onto the screen, they don't use numbers bigger than like 100, meaning this program will run almost perceptually instantly).
It's a subtle metalinguistic joke. It's supposed to poke fun at understandings of how computer science works by turning them on their head.
It's not a "subtle metalinguistic joke". You're just redefining words. The algorithm isn't constant in complexity-theory terms, and it's also not constant in wall-clock time (I'm not sure why you think the two are unrelated?), unless you use a model of computation so bonkers that it would be of no value.
Maybe it works as a sort of dadaistic literature, like the ones where they redefine "chair" to mean "table" and so on, but beyond that?
Both your article and this comment of yours show that you're annoyed at algorithmic interview questions, so you're trying (well, at least in this fantasy) to one-up your interviewers by being smarter, and technically correct, but in an unexpected way. I understand the sentiment, but unfortunately, this only works when you're actually at least technically correct.
Can the wall time be considered constant if it depends on the magnitude of the numbers to sort? A list of numbers between 1 and 1000 will be much faster to sort than a list of numbers between 1 and 10^100. EDIT: I did get the joke when I read your post, and laughed to myself, so well done :)
The constant is the biggest number in the list multiplied by the sleep multiplier. That, plus overhead, gives you the approximate runtime of the function.
If you just repeat the same number a million times, then the threads will all be scheduled to run at the same time, but of course, your computer doesn't have a million cores, so the threads will have to be executed partially in sequence. That means, even if you assume the largest integer is a constant (valid for some integer types, but notably not Integer in Haskell), the runtime will be proportional to the number of elements in the list.
Yeah, but technically that doesn't qualify as a constant, since it depends on the input. If you explicitly constrain the problem space so that the magnitude of the numbers in the list has a fixed upper bound, then it works out. That's fine, but then it's not a fully general sorting algorithm like, e.g., quicksort.
Sure, but in such a world asymptotic complexity has little meaning. For practical purposes it's useful to make the kinds of simplifying assumptions that complexity theory makes.
In theory, since the argument to the sleep must eventually resolve to an integer, the scheduler could use a radix sort or similar to perform in linear time. There are problem spaces where that can be a win, even after introducing a dependency on the magnitude of the largest value.
In practice, of course, no system actually does this. Syscall timeouts just aren't a place where this is typically beneficial. Also, naturally, it would be better to just apply a radix sort directly, which only scales with log(max_value) rather than linearly with max_value.
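To make the comparison concrete, here is a standard LSD radix sort sketch (a textbook implementation, not something from this thread): its work is proportional to n times the number of digits, i.e. n * log_base(max_value), rather than to max_value itself as sleep sort's wall clock is.

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers. Each pass buckets by
    one digit; the number of passes is the digit count of the largest
    value, so runtime scales with n * log_base(max value)."""
    if not nums:
        return nums
    max_val = max(nums)
    exp = 1  # current digit's place value
    while max_val // exp > 0:
        buckets = [[] for _ in range(base)]
        for n in nums:
            buckets[(n // exp) % base].append(n)
        # Stable concatenation preserves order from earlier passes
        nums = [n for bucket in buckets for n in bucket]
        exp *= base
    return nums
```

Sorting values up to 10^100 takes ~100 passes here, versus a wall-clock wait proportional to 10^100 for sleep sort.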
> Creating ‘N’ threads and adding them all to a sorted wake list will be between O(N log N) and O(N^2), depending on the OS and/or language runtime.
This is not a fundamental limitation of scheduling systems, especially if you take into account the possibility of special hardware allowing for constant-time-with-respect-to-number-of-threads scheduling. For example, it is trivial, though economically infeasible for any practical purpose, to construct a scheduler that performs scheduling by bouncing infopackets via laser beams off of a gargantuan set of mirrors of varying distances back onto a detector connected to the computer, relying on the speed of light to perform a delay of the specified duration. This shows that sleep sort does not inherently rely on the hidden algorithmic complexity of whatever thread scheduling approach is used, since that can be optimized to O(1) in theory, if not in practice.
There's a parallel universe where someone wrote the k8s hello-world story instead, making fun of interviewers who test people on the most obscure sorting algorithms, and the psychological profile painted of the interviewee is mostly the same. Maybe not as punchy due to all the YAML, ofc.