Hacker News | nulltrace's comments

Downtime is one thing. Silently reverting commits on your default branch is something else entirely.

Preview deploys are even worse. Every PR spins one up with the same env vars and nobody ever cleans them up. You rotate the key, redeploy prod, and there are still like 200 zombie previews sitting there with the old value.

Catching accidental drift is still worth a lot. It's basically the same idea as performance regression tests in CI, nobody writes those because they expect sabotage. It's for the boring stuff, like "oops, we bumped a dep and throughput dropped 15%".
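A sketch of the kind of CI throughput check described above (the function names and the 15% tolerance are illustrative, not from any particular setup):

```python
import time

def measure_throughput(fn, payload, iterations=10_000):
    """Run fn repeatedly and return operations per second."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn(payload)
    elapsed = time.perf_counter() - start
    return iterations / elapsed

def check_regression(current_ops, baseline_ops, tolerance=0.15):
    """Fail if throughput dropped more than `tolerance` below the stored
    baseline -- catches the boring "bumped a dep, lost 15%" case."""
    floor = baseline_ops * (1 - tolerance)
    return current_ops >= floor
```

In CI you'd persist the baseline from the main branch and fail the build when `check_regression` returns False.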

If someone actually goes out of their way to bypass the check, that's a pretty different situation legally compared to just quietly shipping a cheaper quant anyway.


Also it's not just about running an obviously worse quant.

Running different GPU kernels / inference engines also matters. It's easy to write an implementation that is faster and thus cheaper but numerically much noisier / less accurate.
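A toy stand-in for the accuracy point, in pure Python: the same mathematical sum computed two ways, where naive left-to-right accumulation (the "fast" implementation) silently drops the small terms that compensated summation keeps:

```python
import math

# A large value plus many small ones: each small term falls below the
# rounding granularity of the big accumulator, so a naive loop loses them.
values = [1e16] + [1.0] * 1000

naive = sum(values)        # small terms absorbed one at a time
exact = math.fsum(values)  # compensated (Kahan-style) summation

print(naive - 1e16)  # 0.0 -- the thousand 1.0s vanished
print(exact - 1e16)  # 1000.0
```

Same inputs, same nominal operation, materially different answer, which is exactly the kind of gap a cheaper kernel or reordered reduction can open up.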


Yeah, the threat model is basically nonexistent. Most people use a dozen or so well-known providers, who have no incentive to cheat so obviously.

Right, metaclass is a ways off. But even without it, just the core reflection is going to save a ton of boilerplate. Half the template tricks I've written for message parsing were basically hand-rolling what `^T` will just give you.

Rebalancing is what really kills you. A CAS loop on a flat list is pretty straightforward, you get it working and move on. But rotations? You've got threads mid-insert on nodes you're about to move around. It gets ugly fast. Skiplists just sidestep the whole thing since level assignment is basically a coin flip, nothing you need to keep consistent. Cache locality is worse, sure, but honestly on write-heavy paths I've never seen that be the actual bottleneck.
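The coin flip is the whole trick: each node's height is decided locally at insert time, so there's no global invariant to repair afterward. A minimal sketch (MAX_LEVEL and p=0.5 are conventional choices, not from the comment):

```python
import random

MAX_LEVEL = 16

def random_level(p=0.5, rng=random.random):
    """Pick a skiplist node's level by repeated coin flips: a node reaches
    level k with probability p**k. Purely local -- no shared state to keep
    consistent, which is what lets skiplists skip tree-style rebalancing."""
    level = 1
    while rng() < p and level < MAX_LEVEL:
        level += 1
    return level
```

In expectation about half the nodes stay at level 1, a quarter reach level 2, and so on, giving the O(log n) search structure probabilistically instead of via rotations.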

Yeah pricing seems okay with batching. The 128MB memory cap per Durable Object is what I'd watch. A repo with a few thousand files and some history could hit that faster than you'd expect, especially during delta resolution on push.
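Back-of-envelope with assumed numbers (file count, blob size, and history multiplier are all guesses, not Durable Object measurements):

```python
# Hypothetical mid-sized repo: a few thousand files, modest history.
files = 3000
avg_blob_kb = 30          # average object size before tight packing
history_multiplier = 2.5  # prior versions retained alongside HEAD

mb = files * avg_blob_kb * history_multiplier / 1024
print(round(mb))  # ~220 MB -- well past a 128 MB cap
```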

we do a lot of work to optimize this on our side :)

Fair, but from the user side it still hurts. Setting up an Ed25519 signing context used to be maybe ten lines. Now you're constructing OSSL_PARAM arrays, looking up providers by string name, and hoping you got the key type right because nothing checks at compile time.

Yeah. Some of the more complex EVP interfaces from before and around the time of the forks had design flaws, and with PQC that problem is only going to grow. Capturing the semantics of complex modes is difficult, and maybe that figured into the motivations.

But OSSL_PARAMs on the frontend feel more like a punt than a solution. To maintain API compatibility you still end up with all the same cruft in both the library and the application; it's just more opaque and confusing figuring out which textual parameter names to use and not use, when to refactor, etc. You can't tag a string parameter key with __attribute__((deprecated)).

With the module interface decoupled and a faster release cadence, exploring and iterating on more strongly typed and structured EVP interfaces should be easier, I would think. That's what the forks seem to do. There are incompatibilities across BoringSSL, LibreSSL, etc., but also cross-pollination and communication, and over time the interfaces get refined and unified.

Lockfiles help more than people realize. If you're pinned and not auto-updating deps, a package getting sold and backdoored won't hit you until you actually update.

The scarier case is Dependabot opening a "patch bump" PR that gets merged on autopilot, because everyone waves through patch and minor version bumps.


I mitigate this using a latest-minus-one policy or a minimum-age policy, depending on exactly which dependency we're talking about. Combined with explicit hash pins where possible instead of mutable version tags, it's saved me from a few close calls already, most notably last year's tj-actions breach.
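A sketch of what such a policy check might look like. The `allowed_versions` helper is hypothetical; real release timestamps would come from the registry's metadata API:

```python
from datetime import datetime, timedelta, timezone

def allowed_versions(releases, min_age_days=14, skip_latest=1):
    """Filter candidate upgrades by a latest-minus-one + minimum-age policy.

    `releases`: list of (version_string, upload_datetime) pairs, newest
    first. Skips the newest `skip_latest` releases outright, then drops
    anything younger than `min_age_days` -- the window in which a
    backdoored release is most likely to be caught and yanked.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=min_age_days)
    return [v for v, uploaded in releases[skip_latest:] if uploaded <= cutoff]
```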

I wish the PRs made by the bot could include a diff of the source code of the upgraded libraries (right in the PR, because even if in theory you could manually hunt down the diffs in the various tags, in practice nobody does).

No need to hunt it down, there's a URL in the PR / commit message that links to the full diff.

It also doesn't bother checking what's already in your project. Grep around a bit and you'll find three `formatTimestamp` functions all doing almost the same thing.
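A quick heuristic for surfacing that kind of duplication: a crude regex scan, not a parser, and the matched extensions are just examples:

```python
import re
from collections import Counter
from pathlib import Path

# Crude: matches JS/TS `function foo` and Python `def foo`.
DEF_RE = re.compile(r"(?:function|def)\s+(\w+)")

def duplicate_definitions(root="."):
    """Count function names defined more than once across a source tree --
    rough, but enough to surface three near-identical formatTimestamp
    helpers before a fourth gets written."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".py", ".js", ".ts"}:
            counts.update(DEF_RE.findall(path.read_text(errors="ignore")))
    return {name: n for name, n in counts.items() if n > 1}
```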

This clicks for message parsing too. Had field lookups in a std::map, fine until throughput climbed. Flat sorted array fixed it. Turns out cache prefetch actually kicks in when you give it sequential scans.
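The structure in question, sketched with `bisect`; the cache-prefetch win itself only shows up in a systems language, this just shows the flat sorted layout replacing the node-based map:

```python
from bisect import bisect_left

class FlatFieldMap:
    """Field lookup over a flat sorted array instead of a node-based map.
    Keys sit contiguously in memory, so lookups scan sequentially-friendly
    storage -- in C++ the same layout is what lets the hardware prefetcher
    kick in. (Sketch only; Python won't show the cache effect.)"""

    def __init__(self, pairs):
        pairs = sorted(pairs)
        self._keys = [k for k, _ in pairs]
        self._vals = [v for _, v in pairs]

    def get(self, key, default=None):
        i = bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._vals[i]
        return default
```

Works best when the field set is fixed after parsing, so the sort cost is paid once up front.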

