UNIX signals *do not* queue. If two or more signals with the same number are sent faster than the receiving thread handles them (due to the signal being blocked and/or the thread not being scheduled), all but one will be lost irrevocably. There is no mechanism to prevent this.
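A minimal C demo of the coalescing described above (the article is Ruby, but the behaviour is easiest to show against the raw POSIX API): block SIGUSR1, send it to yourself five times, then unblock; the handler runs once, because pending instances of a standard signal are merged.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t hits = 0;

    static void handler(int sig) { (void)sig; hits++; }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = handler;
        sa.sa_flags = 0;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        sigset_t set, old;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        sigprocmask(SIG_BLOCK, &set, &old);      /* hold SIGUSR1 pending */

        for (int i = 0; i < 5; i++)
            kill(getpid(), SIGUSR1);             /* five sends while blocked... */

        sigprocmask(SIG_SETMASK, &old, NULL);    /* ...delivered as a single signal */
        printf("handler ran %d time(s)\n", (int)hits);  /* prints 1 */
        return 0;
    }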
RT signals do get queued... that is one of the major differences (and yes, the essay is not using them, so your point stands as it is written, but using RT signals is a mechanism to prevent it).
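For comparison, the same experiment with a realtime signal sent via sigqueue(); every instance is kept, so the handler runs once per send (just a sketch, not what the article does):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t hits = 0;

    static void handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)info; (void)ctx;
        hits++;
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGRTMIN, &sa, NULL);

        sigset_t set, old;
        sigemptyset(&set);
        sigaddset(&set, SIGRTMIN);
        sigprocmask(SIG_BLOCK, &set, &old);      /* hold SIGRTMIN pending */

        for (int i = 0; i < 5; i++) {
            union sigval v;
            v.sival_int = i;                     /* RT signals can carry a payload too */
            sigqueue(getpid(), SIGRTMIN, v);     /* each instance is queued */
        }

        sigprocmask(SIG_SETMASK, &old, NULL);
        printf("handler ran %d time(s)\n", (int)hits);  /* 5 on Linux */
        return 0;
    }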
Come to think of it, I think the original idea can be salvaged with an acknowledgment signal. Send bit, wait for acknowledgment, send next bit or retransmit accordingly. Actually you would need a handshake before each bit.
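A purely hypothetical sketch of the sender side of that idea (the article does none of this): one signal per bit, then sigsuspend() until the receiver ACKs back on SIGUSR1. The receiver side isn't shown, and retransmission is left out.

    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t acked = 0;

    static void on_ack(int sig) { (void)sig; acked = 1; }

    /* Send one byte to `receiver`, MSB first: SIGUSR1 = 0 bit, SIGUSR2 = 1 bit.
     * After each bit, block until the receiver ACKs with SIGUSR1. */
    static int send_byte(pid_t receiver, unsigned char byte)
    {
        sigset_t block, unblocked;
        sigemptyset(&block);
        sigaddset(&block, SIGUSR1);
        sigprocmask(SIG_BLOCK, &block, &unblocked);  /* avoid the ack/wait race */

        for (int bit = 7; bit >= 0; bit--) {
            acked = 0;
            int sig = ((byte >> bit) & 1) ? SIGUSR2 : SIGUSR1;
            if (kill(receiver, sig) == -1)
                return -1;
            while (!acked)
                sigsuspend(&unblocked);   /* atomically unblock SIGUSR1 and wait */
        }
        sigprocmask(SIG_SETMASK, &unblocked, NULL);
        return 0;
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_ack;
        sa.sa_flags = 0;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);       /* the ACK comes back on SIGUSR1 */

        /* send_byte(receiver_pid, 'A');  -- receiver_pid would be the other
           process's pid; how the two discover each other is out of scope. */
        return 0;
    }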
"Yes, we built a message broker using nothing but UNIX signals and a bit of Ruby magic. Sure, it’s not production-ready, and you definitely shouldn’t use this in your next startup (please don’t), but that was never the point.
"The real takeaway here isn’t the broker itself: it’s understanding how the fundamentals work. We explored binary operations, UNIX signals, and IPC in a hands-on way that most people never bother with.
"We took something “useless” and made it work, just for fun. So next time someone asks you about message brokers, you can casually mention that you once built (or saw) one using just two signals. And if they look at you weird, well, that’s their problem. Now go build something equally useless and amazing. The world needs more hackers who experiment just for the fun of it."
Unfortunately I bet that 90% won't even reach that part and will just ragebait based on the title. The golden rule of the modern age: always put the disclaimer as early as possible.
I'm truly curious to know why this would be rude, seriously. Maybe it's a cultural mismatch.
For me ragebait and rudeness are things like: "X sucks, use Y", "If you aren't doing W you're losing money", etc.
He never said that Kafka sucks, nor anything like it, and obviously you can't replace Kafka with only two signals. I'm asking as politely as possible; I just want to understand what other people consider improper behavior.
I don't know if POSIX has a position on signal order. But I'm pretty sure it allows signals to be coalesced... if a process is sent the same signal several times before the handler is invoked, it's in spec to only invoke it once.
We run a very simple filesystem-based queue that processes around 1 billion events a day. It makes use of XFS for its better handling of large numbers of files.
Corporate tried to push us to replace it with SQS, and it could not keep up / costs went through the roof.
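The design isn't described above, but for anyone curious, a common way to build this kind of filesystem queue is maildir-style: write each event under a tmp/ directory, fsync it, then rename() it into the queue directory. rename() is atomic within a filesystem, so consumers scanning the queue never see half-written events. Rough sketch, with made-up paths and naming:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int enqueue(const char *payload)
    {
        static unsigned long seq;
        char tmp[128], final[128];
        snprintf(tmp,   sizeof tmp,   "/var/spool/myq/tmp/%ld.%lu", (long)getpid(), seq);
        snprintf(final, sizeof final, "/var/spool/myq/new/%ld.%lu", (long)getpid(), seq);
        seq++;

        int fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, payload, strlen(payload)) < 0 || fsync(fd) < 0) {
            close(fd);
            unlink(tmp);
            return -1;
        }
        close(fd);

        return rename(tmp, final);   /* atomic "publish" to consumers */
    }

    int main(void)
    {
        return enqueue("{\"event\":\"example\"}") == 0 ? 0 : 1;
    }

Consumers would then scan the queue directory and rename each file again to claim it, since only one rename can win.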
I enjoyed reading the article and found it interesting. Had no prior interest in the topic but it was an entertaining read. Last time I used binary and calculated with it was in high school. Didn't know about UNIX signals and now I understand how processes get terminated.
I challenged ChatGPT the other day to design a bidirectional process interop in *nix and this was one of the suggestions. Until then I had only ever thought of pipes as unidirectional. I still thought it was bonkers. This looks like a neat prototype though.
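For what it's worth, a pipe really is one-way, but socketpair() gives you a single bidirectional descriptor pair between parent and child with the same read/write semantics. Minimal sketch of a request/reply round trip (not the prototype from the article, just the plain POSIX call):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
            return 1;

        if (fork() == 0) {                  /* child: trivial echo "server" */
            close(sv[0]);
            char buf[64];
            ssize_t n = read(sv[1], buf, sizeof buf);
            if (n > 0)
                write(sv[1], buf, n);       /* reply on the same descriptor */
            _exit(0);
        }

        close(sv[1]);                       /* parent: send request, read reply */
        write(sv[0], "ping", 4);
        char buf[64];
        ssize_t n = read(sv[0], buf, sizeof buf);
        if (n > 0)
            printf("reply: %.*s\n", (int)n, buf);
        wait(NULL);
        return 0;
    }

(Two pipe() calls, one per direction, work too; socketpair just keeps it to one descriptor per side.)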
Fun article, but the title definitely overstates things given the huge amount of functionality you'd lose by actually replacing Kafka. Durability and broadcasting immediately come to mind.
https://ldpreload.com/blog/signalfd-is-useless