matharmin's comments | Hacker News

I can see a lot of time was put into the report, and it helps to have the detail, but in my mind it glosses over one of the most important parts: The dispute in the stewardship of the bundler and rubygems open-source projects.

As I understand it, Ruby Central controlled the rubygems and bundler GitHub organizations, but did not "own" the projects in the traditional sense - the individual contributors hold copyright on the code, and potentially even trademark rights. By then removing core maintainers' access to those projects, they removed access to something they don't "own" themselves.

This is all complicated by the fact that controlling a GitHub organization or repo is different from owning the trademark or copyright. But some of the original maintainers clearly felt they had more of a right to those projects than Ruby Central did.

I believe not clarifying this before making these access changes was the biggest mistake that Ruby Central made, and it's not even mentioned in this report.


I don't have much skin in the game, but as a passerby, I agree that the report was obviously made with a lot of time/effort but wouldn't dramatically change someone's view of Ruby Central or assure anyone this won't happen again. This is like writing an outage postmortem without really getting to the root cause and identifying what can be done to prevent it in the future.

I think part of that is that it was written from the perspective of the bug that caused the outage ;)

There’s a ton of detail in the report, so perhaps I missed it, but yes, the underlying structural/governance flaw of conflating a service with the IP that runs that service is a root cause here and seems insufficiently called out. The tragedy of misconception -> misconstruction -> misconfiguration is common when the bridge between governance and engineering is crossed.

The takeaway for the rest of us is that separation of such concerns isn’t an abstract notion but needs to be reflected in the mechanical implementation of organisations, lest you get a train wreck later when perspectives don’t align and the whole picture crumbles.


> dispute in the stewardship of the bundler

This was never in dispute between the two parties. Ruby Central and "the maintainers" agreed from the beginning that it was collateral damage. The disagreement was over what that meant and what to do with it. Hence the Sept 10 message from the Ruby Central Committee that they should move it to the Ruby core org (which IMO is long overdue).

The original plan (by the OSS committee) was to move bundler to the Ruby org, and that's what happened. When it did, the community generally liked it (judging by HN and Reddit comments).


> individual contributors have copyright on the code, and potentially even trademark

They're not the original authors of Rubygems so it's doubtful they have anything more than copyright on the code they contributed.


IIRC the original authors of rubygems are also the original founders of Ruby Central (Chad Fowler, David A. Black, Rich Kilmer, Jim Weirich?), so probably the line was blurrier back then.

What features are you using that the $18/user/month plan doesn't cover?


I don't pay for Slack any more; I just picked the price of their enterprise plan. Large users probably get big discounts, but it doesn't matter - the cutoff where this makes sense financially is probably around 4,000 employees even at $10/seat.


The article mentions legal/audit features that, in the author's opinion, any reasonably sized company needs. These features are apparently only on the expensive plan.


In my experience, Windows is very far from a "it just works" OS.


It's the ambition as a home-user OS though, like macOS. And in the discussion of "it just works" operating systems, what else are we to go by than the vendors' ambitions? Personal opinions? In that case, neither qualifies, because both have struggled to always work in all scenarios since their respective inceptions.


When the phrase originated, manually updating CONFIG.SYS and AUTOEXEC.BAT were expected skills of a home PC owner. The idea of buying a device, plugging it in, and having it work without a complex setup was unheard of. "It just works" on the Mac meant the absence of a DOS layer, IRQs, command lines, etc.


This is still an interesting read, but has anything here changed in the meantime? And out of interest, do other JS engines use the same type of structure to represent properties?


There are a bunch of utilities that don't actually _do_ anything useful. The proxy in this example is used for nothing other than debug logs. The DOM utility layer just slightly reduces the number of LOC to create a DOM node.

And then you end up with consumer code that is not actually declarative? The final code still directly manipulates the DOM. And this shows the simplest possible example - creating and removing nodes. The difficult part that libraries/frameworks solve is _updating_ the DOM at scale.


If that is your source, then Safari was _way_ behind for all of 2025 up until this month, where it suddenly caught up.


True; I was too fascinated by the big green numbers to pay proper attention to the chart below them. Good for them that they finally caught up.


SBOM may contain similar info to lockfiles, but the purposes are entirely different.

Lockfiles tell the package manager what to install. An SBOM tells the user what your _built_ project contains. In some cases they could be the same, but in most cases they're not.

It's more complicated than just annotating which dependencies are development versus production dependencies. You may install dependencies but not actually use them in the build (for example, optional transitive dependencies). Some build tools can detect this and omit them from the SBOM, but you can't omit them from your lockfile.

Fundamentally, lockfiles are an input to your development setup process, while an SBOM is an output of the build process.

Now, there is still an argument that you could use the same _format_ for both. But there are no significant advantages to that: the SBOM is more verbose, does not diff well, and will result in worse performance.
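To make the contrast concrete, here's a hypothetical comparison (package name and values are illustrative, both entries abbreviated). An npm lockfile entry carries install-oriented fields:

```json
"node_modules/example-lib": {
  "version": "1.2.3",
  "resolved": "https://registry.npmjs.org/example-lib/-/example-lib-1.2.3.tgz",
  "integrity": "sha512-…"
}
```

while the corresponding component in a CycloneDX-style SBOM carries provenance-oriented fields:

```json
{
  "type": "library",
  "name": "example-lib",
  "version": "1.2.3",
  "purl": "pkg:npm/example-lib@1.2.3",
  "licenses": [{"license": {"id": "MIT"}}]
}
```

The lockfile needs the resolution URL and integrity hash so the package manager can reproduce the install; the SBOM instead records identity and licensing so a consumer can audit what actually shipped.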


So the lockfile is a superset, but never a subset?

So it basically is an SBOM then, but just sometimes has extra dependencies?


Superset of dependencies, but often a subset of info per dependency.


Ah okay! I knew Rust has the transitive dependencies, but didn't think/realise all languages might not - good point!


As mentioned in those threads, there is no SQLite WAL corruption if you have a working disk & file system. If you don't, then all bets are off - SQLite doesn't protect you against that, and most other databases won't either. And nested transactions (SAVEPOINT) won't have any impact on this - all it does in this form is reduce the number of transactions you have.
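For what it's worth, a minimal sketch of that SAVEPOINT behaviour using Python's sqlite3 module (file path and names are illustrative): the savepoint nests inside one outer transaction, so rolling it back undoes only the inner work, and only the single top-level COMMIT is durable.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.isolation_level = None               # manage transactions manually
conn.execute("PRAGMA journal_mode=WAL")   # use WAL mode
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT sp1")             # nested transaction starts
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO sp1")           # undoes only the inner insert
conn.execute("RELEASE sp1")
conn.execute("COMMIT")                    # one top-level commit

print(conn.execute("SELECT x FROM t").fetchall())  # [(1,)]
```

The point being: savepoints change how many rows you batch per commit, not what happens when the underlying disk or file system misbehaves.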


> working disk & file system

And a working ECC or non-ECC RAM bus, and [...].

How bad is recovery from WAL checksum / journal corruption [in SQLite] [with batching at 100k TPS]?

And should WAL checksums be used for distributed replication "bolted onto" SQLite?

>> (How) Should merkle hashes be added to sqlite for consistency? How would merkle hashes in sqlite differ from WAL checksums?

SQLite would probably still be faster over the network with proper Merkleization


We're relying on logical replication heavily for PowerSync, and I've found it is a great tool, but it is also very low-level and under-documented. This article gives a great overview - I wish I had this when we started with our implementation.

Some examples of difficulties we've run into:

1. LSNs for transactions (commits) are strictly increasing, but not for individual operations across transactions. You may not pick this up during basic testing, but it starts showing up when you have concurrent transactions.

2. You cannot resume logical replication in the middle of a transaction (you have to restart the transaction), which becomes relevant when you have large transactions.

3. In most cases, replication slots cannot be preserved when upgrading Postgres major versions.

4. When you have multiple Postgres clusters in a HA setup, you _can_ use logical replication, but it becomes more tricky (better in recent Postgres versions, but you're still responsible for making sure the slots are synced).

5. Replication slots can break in many different ways, and there's no good way to know all the possible failure modes until you've run into them. Especially fun when your server ran out of disk space at some point. It's a little better with Postgres 17+ exposing wal_status and invalidation_reason on pg_replication_slots.

6. You need to make sure to acknowledge keepalive messages and not only data messages, otherwise the WAL can keep growing indefinitely when you don't have incoming changes (depending on the hosting provider).

7. Common drivers often either don't implement the replication protocol at all, or attempt to abstract away low-level details that you actually need. Here it's great that the article actually explains the low-level protocol details.
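On the slot-health point, a minimal diagnostic query might look something like this (a sketch; invalidation_reason as described is a Postgres 17+ column, so adjust for older versions):

```sql
-- Inspect logical replication slot health.
-- wal_status values: reserved | extended | unreserved | lost
-- ('lost' means the slot is unusable and must be recreated).
-- invalidation_reason (Postgres 17+) explains why a slot was
-- invalidated, e.g. wal_removed after running out of disk space.
SELECT slot_name,
       active,
       wal_status,
       invalidation_reason
FROM pg_replication_slots
WHERE slot_type = 'logical';
```

Polling something like this from monitoring is one of the few ways to catch a broken slot before the WAL backlog (or a silently stalled consumer) becomes the incident.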


Yeah I was debating heavily between WAL and L/N. Tried to get WAL set up, struggled; tried to learn more about WAL, failed; tried to persevere, shot myself in the foot.

At the end of the day the simplicity of L/N made it well worth the performance degradation. Still making thousands-to-millions of writes per second, so when the original article said they were 'exaggerating' I think they may have gone too far.

I've been hoping WAL gets some more documentation love in the years/decades L/N will serve me should I ever need to upgrade, so please share more! :D


Probably a security feature. If it can access the internet, it can send your private data to the internet. Of course, if you allow it to run arbitrary commands it can do the same.

