My use case was building an append-only blob store with mandatory encryption; using a semaphore + direct goroutine calls to limit background write concurrency, instead of a channel + dedicated writer goroutines, was a net win across a wide variety of write sizes and max concurrent in-flight writes. It is interesting that FrankenPHP + Caddy came to almost the same conclusion despite doing vastly different work.
this makes sense for your workload, but might the right primitive be a function of your payload profile and business constraints?
in my case the problem doesn't arise because control plane and data plane are separated by design — metadata and signals never share a concurrency primitive with chunk writes. the data plane only sees chunks of similar order of magnitude, so a fixed worker pool doesn't overprovision on small payloads or stall on large ones.
curious whether your control and data plane are mixed on the same path, or whether the variance is purely in the blob sizes themselves.
if it's the latter: I wonder if batching sub-1MB payloads upstream would have given you the same result without changing the concurrency primitive. did you have constraints that made that impractical?
In my case, "background writes" literally means "do the io.WriteAt for this fixed-size buffer in another goroutine so that the one servicing the blob write can get on with encryption, CRC calculation, and stuffing the resulting byte stream into fixed-size buffers". Handling it that way lets me keep the IO to the kernel as saturated as possible without the added scheduling + mutex overhead that sending everything through a channel incurs, while still keeping a hard upper bound on IO in flight (max semaphore weight) and write-buffer allocations (sync.Pool). My fixed-size buffers are 32k, and it is a net win even there.
right — no variance, question was off target. worth noting though: the sema-bounded WriteAt goroutines are structurally a fan-out over homogeneous units, even if the pipeline feels linear from the blob's perspective. that's probably why the channel adds nothing — no fan-in, no aggregation, just bounded fire-and-forget.
If the performance charts are to be believed, this has uniformly worse performance in fetching and iterating over items than a boring old b-tree, which makes it a total nonstarter for most workloads I care about.
It is also sort of ironic that one of the key performance callouts is a lack of pointer chasing, but the Go implementation is a slice that contains other slices without ensuring they share the same backing array, which is just pointer chasing under the hood. I have not examined the code closely, but it is also probably what let them get rid of the black array as a performance optimization.
`defer` is obviously not implemented in this way; the compiler re-orders the code to flow top-to-bottom with fewer branches, but the control flow is effectively the same thing.
In theory a compiler could implement `comefrom` by re-ordering the basic blocks like `defer` does, so that the actual runtime evaluation of code is still top-to-bottom.
I submitted a suggestion to add the sophisticated multi-engine FOSS soft synth that I use, Yoshimi (https://yoshimi.sourceforge.io/), which is a Linux-only fork of ZynAddSubFX.
Vital is a wave table synth; Helm is a subtractive synth.
Helm was the first synthesizer that I really excelled with. I would recommend that anyone who wants to actually learn the fundamentals of synthesis start on it. Once you get good at it, it's faster to dial in the exact sound you want than to reach for a preset.
It's far more straightforward and less complicated than additive (ZynAddSubFX), FM, or wave table synths.
That being said, if you just want a very advanced synth with a lot of great presets, Vital is far more capable.
No, you cannot. Our abstract language abilities (especially the written word part) are a very thin layer on top of hundreds of millions of years of evolution in an information dense environment.