Hacker News | jacques_chester's comments

> db9 is a PostgreSQL-compatible distributed SQL database. Your data is stored in a distributed TiKV cluster, and each database (tenant) gets its own isolated keyspace. [0]

I feel like the lede is a bit buried here, bordering on deceptive.

That or the architecture doc is wrong. Both plausible I guess, in this day and age.

[0] https://db9.ai/docs/sql


Hello, the developer of db9 here. You're right, that section is indeed a bit too brief; we will add more architecture documentation later. What I wanted to convey is that, unlike standard PostgreSQL, db9 is more of a PostgreSQL-compatible SQL layer built on top of a large distributed KV store. I also shared a brief introduction in this tweet, which might help clarify things. https://x.com/dxhuang/status/2032016443114733744
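(For readers unfamiliar with the SQL-on-KV pattern: one common way such layers give each tenant "its own isolated keyspace" is key prefixing. A hypothetical sketch follows; none of these names or encodings come from db9's docs.)

```python
# Hypothetical sketch of tenant keyspace isolation via key prefixing,
# a common technique in SQL layers built on a shared distributed KV store.
def encode_key(tenant_id: int, table_id: int, primary_key: bytes) -> bytes:
    # Fixed-width big-endian prefixes keep each tenant's keys in a
    # contiguous, sorted range of the underlying KV store, so a tenant
    # scan never touches another tenant's data.
    return tenant_id.to_bytes(8, "big") + table_id.to_bytes(8, "big") + primary_key

k1 = encode_key(1, 10, b"row-a")
k2 = encode_key(2, 10, b"row-a")
assert k1 < k2  # tenant 1's entire range sorts before tenant 2's
```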


"Compatible" isn't mentioned on the homepage, though, despite multiple opportunities to do so -- "Create, manage and query serverless PostgreSQL", "Run history, status, and metadata live in Postgres", "Full Postgres. Fully typed.".

This lack of detail may cause folks to form the incorrect impression that this is PostgreSQL, or a fork of it, or some module or plugin for it. Folks will be upset to learn that they were misinformed. Some will assign deception as the cause, whether that is true or not.

I think your interests would be best served by trying to make that distinction clear and prominently so. So for example "A PostgreSQL-compatible, fully serverless database", or similar.

I hope I have explained better.


If you're using TiKV why not use TiDB too, which is MySQL compatible?


It has nothing to do with TiDB; db9 is built from scratch. A good way to think about it is that db9 is similar to tidb-server: it provides a PostgreSQL wire protocol and SQL layer, while the actual data lives in the underlying KV layer.


If it is based on TiKV, why is it built from scratch when TiDB exists? To achieve faster boot times? I think the bigger offering here is distributed, not serverless, Postgres!


Wonder if that’s what it is, with a few changes to make it postgres wire compatible


Doltgres actually is a true versioned Postgres under the hood (or MySQL).

This sounds really interesting, and I like the ease with which I could spin something up here and get embeddings for sure! But I would think the actual runtime perf of this would be “fine” for some text, but nowhere near Postgres level for all sorts of other stuff, right?

I am a huge fan of Postgres as a database, and of SQL, etc. but I don’t think I understand the benefit of using Postgres’ wire format here since it’s not Postgres behind the scenes. I guess that lets you use psql as the client?


PG-compatible means that if you built your application or analytics queries for PG SQL, it's very easy to migrate to an XYZ database that takes PG SQL as input and returns the same results in most cases. Wire-format compatibility means you can point your existing code at the database and get the same responses as from a normal PostgreSQL server.

I agree with the commentary above that it's much clearer to describe something as "PG SQL/wire format compatible".


> I don’t think I understand the benefit of using Postgres’ wire format here since it’s not Postgres behind the scenes. I guess that lets you use psql as the client?

Clients can continue using PG drivers, so the author doesn't need to build and distribute his own drivers for N programming languages and M OSes.
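The "any existing driver just works" property falls out of the protocol itself. A minimal sketch of the PostgreSQL v3 StartupMessage every wire-compatible server must accept (field layout per the published frontend/backend protocol; the user/database values here are made up):

```python
import struct

# PostgreSQL v3 StartupMessage: Int32 length (including itself),
# Int32 protocol version (3.0 = 3 << 16 = 196608), then
# NUL-terminated key/value pairs, ending with a single NUL byte.
params = b"user\x00alice\x00database\x00mydb\x00\x00"
body = struct.pack("!I", 196608) + params
msg = struct.pack("!I", 4 + len(body)) + body
```

Any server that parses this and the rest of the message flow correctly is, from the driver's point of view, indistinguishable from Postgres, regardless of what storage engine sits behind it.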


Quick aside: some Dutch folks did a more language-y DSL for tax codes, which might be of interest. I don't know if it's still being used, though.

https://resources.jetbrains.com/storage/products/mps/docs/MP...


I would be interested in a layperson summary of how this code deals with whacky distributions produced by computer systems. I am too stupid to understand anything more complex than introductory SPC which struggles with non-normal distributions.


My guess is that it's a response to "Chainguard are growing so fast that VCs have fought each other to give them hundreds of millions in 3 years despite having no AI play".


At Shopify I was the person who first proposed that we needed to stump up $$$ for RubyGems (and only by implication Ruby Central).

This is not what I had in mind and now I'm embarrassed that I helped make it possible.


Sounds like Shopify has some leverage then to open a line of comms with Ruby Central. "Explain yourselves or we will pull funding"


The problem is that Shopify is leveraged by DHH (who is on their board) to be the financial support referenced elsewhere in today’s discourse. Shopify is a bad actor here


If that's the case, it sounds contradictory to their status as a 501(c)(3) and could get their tax-exempt status pulled if it's true.


I should add, to clarify: I don't work at Shopify anymore and I'm not speaking for them. Purely a personal view.


Oh, this old chestnut. "Just do what the distros do".

OK, sure, let's pencil this out.

Debian has ~1k volunteers overseeing ~20k packages. Say the ratio is 20:1.

npm alone -- not counting other ecosystems, just npm -- has 3 million packages.

So you'd need 150k volunteers. One hundred and fifty thousand unpaid individuals, not counting original authors.

For one repo.

"Nonsense", you riposte. "Only maybe 100k of these packages are worth it!"

Cool, cool. Then you'd need "only" 5 thousand volunteers. Debian maxed out at 1k and it is probably the source of the most-used software in history. But sure, we'll find 5 thousand qualified people willing to do it for free.
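(The figures quoted above pencil out as stated; a back-of-envelope check:)

```python
# Back-of-envelope check of the volunteer arithmetic above,
# using the figures quoted in the comment.
debian_volunteers = 1_000
debian_packages = 20_000
packages_per_volunteer = debian_packages / debian_volunteers  # the 20:1 ratio

npm_packages = 3_000_000
full_coverage = npm_packages / packages_per_volunteer  # volunteers for all of npm
curated_coverage = 100_000 / packages_per_volunteer    # volunteers for a curated subset
```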

Oh, but how do you identify those 100k packages? OK, let's use download count. Or maybe reference count. Network centrality perhaps? Great, great. But some of them will be evicted from this paradise of rigorous repackaging. What replaces them? Oh, shoot, we need humans to go over up to 3 million packages to find the ones we want to keep.

What I need distro boosters to understand is that the universe of what is basically a package manager for large C libraries is at least two orders of magnitude smaller than everything else, bordering on three if you roll all the biggest repos together. The dynamics at language ecosystem scale are simply different. Yelling at the cloud that it should actually be a breeze isn't going to change things.


There are probably 5k libraries and frameworks worth paying attention to from the OSS community, with an organization structure similar to the Eclipse Foundation or Apache. The rest is either junk, a low-risk solo-maintained project, or corporate stuff maintained by someone on salary.


> Oh, this old chestnut. "Just do what the distros do"... The dynamics at language ecosystem scale are simply different.

The reason for the unwieldy scale might be the lack of proper package inspection and maintenance, which the dreaded old chestnuts do provide.

With proper package management, the number of packages will go down while their quality will go up, it's a win-win.

Can that be done for all packages at once? No, just give a mark of quality to the packages whose authors or maintainers cared to move to the new process. The rest produce a warning - "package not inspected for quality". Done!


Glad to hear it's all so simple. So you'll have no problem setting it up and finding thousands of volunteers to help, right?


Yes, I'm perfectly fine with setting up and recruiting volunteers for important software initiatives, and no, I'm not going to do that for npm before they fix the mess they themselves created; there are more productive ways to get the job done without using npm. It's good that we have choices.

What I advised doesn't require "thousands of volunteers"; you can start with one, but that's not going to be me, because you might be right: what Linux distros are doing might be impossible in the npm community, given the widespread 'do-first-think-later' attitude. As I said, it's good we have other choices.


I disagree with the theses in this piece.

1. "2FA doesn't work". Incorrect. MFA relying on SMS or TOTP is vulnerable to phishing. Token or device based is not. And indeed GitHub sponsored a program to give such tokens away to critical developers.

In 2021.

2. "There's no signing". Sigstore support shipped in like 2023.

The underlying view is that "Microsoft isn't doing anything". They have been. For years. Since at least 2022, based on my literal direct interactions with the individuals directly tasked with doing the things that you say aren't or haven't been done.

I have no association with npm, GitHub or Microsoft. My connection was through Shopify and RubyGems. But it really steams me to see npm getting punched up with easily checked "facts".


Most of the biggest repositories already cooperate through the OpenSSF[0]. Last time I was involved in it, there were representatives from npm, PyPI, Maven Central, Crates and RubyGems. There's also been funding through OpenSSF's Alpha-Omega program for a bunch of work across multiple ecosystems[1], including repos.

[0] https://github.com/ossf/wg-securing-software-repos

[1] https://alpha-omega.dev/grants/grantrecipients/


Trunk-based development, by itself, is a fool's errand.

But combine it with TDD & pairing and it becomes a license to deliver robust features at warp speed.


I don’t follow. Regardless of where you merge, are you not developing features on a shared branch with others? Or do you just have a single long development branch and then merge once “you’re done” and hope there are no merge conflicts? But regardless, I’m missing how reviews are being done.


It's not for everyone. Some people have excellent reasons why it isn't workable for them. Others have had terrible experiences. It takes a great deal of practice to be a good pair and, if you don't start by working with an experienced pair, your memories of pairing are unlikely to be fond.

However.

I paired full-time, all day, at Pivotal, for 5 years. It was incredible. Truly amazing. The only time in my career when I really thrived. I miss it badly.

