Hacker News | vlowther's comments

My use case was building an append-only blob store with mandatory encryption, but using a semaphore + direct goroutine calls to limit background write concurrency instead of a channel + dedicated writer goroutines was a net win across a wide variety of write sizes and max concurrent inflight writes. It is interesting that frankenphp + caddy came to almost the same conclusion despite doing vastly different work.


this makes sense for your workload, but might the right primitive be a function of your payload profile and business constraints?

in my case the problem doesn't arise because control plane and data plane are separated by design — metadata and signals never share a concurrency primitive with chunk writes. the data plane only sees chunks of similar order of magnitude, so a fixed worker pool doesn't overprovision on small payloads or stall on large ones.

curious whether your control and data plane are mixed on the same path, or whether the variance is purely in the blob sizes themselves.

if it's the latter: I wonder if batching sub-1MB payloads upstream would have given you the same result without changing the concurrency primitive. did you have constraints that made that impractical?


In my case, "background writes" literally means "do the io.WriteAt for this fixed-size buffer in another goroutine so that the one servicing the blob write can get on with encryption / CRC calculation / stuffing the resulting byte stream into fixed-size buffers". Handling it that way lets me keep the IO to the kernel as saturated as possible without the scheduling + mutex overhead that sending work through a channel incurs, while still keeping a hard upper bound on IO in flight (max semaphore weight) and write buffer allocations (sync.Pool). My fixed-size buffers are 32k, and it is a net win even there.

right — no variance, question was off target. worth noting though: the sema-bounded WriteAt goroutines are structurally a fan-out over homogeneous units, even if the pipeline feels linear from the blob's perspective. that's probably why the channel adds nothing — no fan-in, no aggregation, just bounded fire-and-forget.

If the performance charts are to be believed, this has uniformly worse performance in fetching and iterating over items than a boring old b-tree, which makes it a total nonstarter for most workloads I care about.

It is also sort of ironic that one of the key performance callouts is a lack of pointer chasing, but the Go implementation is a slice that contains other slices without making sure they are using the same backing array, which is just pointer chasing under the hood. I have not examined the code closely, but it is also probably what let them get rid of the black array as a performance optimization.


That is the most cursed description I have seen on how defer works. Ever.


This is how it would look with explicit labels and comefrom:

    puts("foo");
    before_defer0:
    comefrom after_defer1;
    puts("bar");
    after_defer0:
    comefrom before_defer0;
    puts("baz");
    before_defer1:
    comefrom before_ret;
    puts("qux");
    after_defer1:
    comefrom before_defer1;
    puts("corge");
    before_ret:
    comefrom after_defer0;
    return;

`defer` is obviously not implemented in this way, it will re-order the code to flow top-to-bottom and have fewer branches, but the control flow is effectively the same thing.

In theory a compiler could implement `comefrom` by re-ordering the basic blocks like `defer` does, so that the actual runtime evaluation of code is still top-to-bottom.


Microsoft Active Career Copilot 365 ONE, thankyouverymuch.


Grow your .NET-work!


It took quite a bit of scrolling until I found my old faves of dexed and zynaddsubfx, and I didn't see Helm (https://tytel.org/helm/) at all.


I submitted a suggestion to add the sophisticated multi-engine FOSS soft synth that I use, Yoshimi (https://yoshimi.sourceforge.io/), which is a Linux-only fork of ZynAddSubFX.


Helm has been replaced in practice by Vital (same author), I think.


They are completely different synths.

Vital is a wave table synth; Helm is a subtractive synth.

Helm was the first synthesizer that I really excelled with. I would recommend that anyone who wants to actually learn the fundamentals of synthesis start on it. Once you get good at it, it's faster to dial in the exact sound you want than to reach for a preset.

It's far more straightforward and less complicated than additive (ZynAddSubFX), FM, or wave table synths.

That being said, if you just want a very advanced synth with a lot of great presets, Vital is far more advanced.


Surge XT is also at the bottom of the list.


None of the big browsers can be trusted at this point. Firefox is at best the least worst out of the big cross platform browsers.


DOS, early Windows, and early MacOS worked more or less exactly that way. Somehow, we all survived.


Apple nearly didn't survive until they bought a company that made an OS that didn't work that way.


MS Word had to be rewritten from ~scratch to get rid of a document-losing crash. Yes, we survived, but at what cost?


No, you cannot. Our abstract language abilities (especially the written word part) are a very thin layer on top of hundreds of millions of years of evolution in an information dense environment.


Sure, but language is the only thing that meaningfully separates us from other great apes


No it isn't. Most animals also have a language, and humans do far more things differently than just speak.


> most animals also have a language

Bruh


Only if you are doing in-place updates. If append-only datastores are your jam, writes via mmap are Just Fine:

  $ go test -v
  === RUN   TestChunkOps
      chunk_test.go:26: Checking basic persistence and Store expansion.
      chunk_test.go:74: Checking close and reopen read-only
      chunk_test.go:106: Checking that readonly blocks write ops
      chunk_test.go:116: Checking Clear
      chunk_test.go:175: Checking interrupted write
  --- PASS: TestChunkOps (0.06s)
  === RUN   TestEncWriteSpeed
      chunk_test.go:246: Wrote 1443 MB/s
      chunk_test.go:264: Read 5525.418751 MB/s
  --- PASS: TestEncWriteSpeed (1.42s)
  === RUN   TestPlaintextWriteSpeed
      chunk_test.go:301: Wrote 1693 MB/s
      chunk_test.go:319: Read 10528.744206 MB/s
  --- PASS: TestPlaintextWriteSpeed (1.36s)
  PASS


Before it was an ACID compliant database, it was also the fastest backup solution on the market: https://bofh.bjash.com/bofh/bofh1.html

