> All that's left to do now is... the same joins, but inside the backend web server. Because we have to re-assemble these flat outputs into the structure of the page.
This is NOT the case with modern SQL, which supports JSON. The typical Postgres use case would be to produce denormalised views for the specific relationships and/or aggregates in JSON form. LATERAL, the correlated sub-query, is really convenient for avoiding double-counting. The rest of the article deals with object-mapping, which is really a poor man's VIEW. Time and time again, software engineers reinvent the data-access wheel out of ignorance of database capabilities.
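To make that concrete, here's a minimal sketch of such a view; the table and column names (author, post, comment) are assumptions for illustration, not from the article:

```sql
-- Denormalised view: one row per author, with posts (and their comments)
-- nested as JSON. Each LATERAL aggregate is correlated to its parent row,
-- so joining posts and comments never multiplies rows (no double-counting).
CREATE VIEW author_feed AS
SELECT a.id,
       a.name,
       p.posts
FROM author a
CROSS JOIN LATERAL (
    SELECT COALESCE(json_agg(json_build_object(
               'title',    p.title,
               'comments', c.comments)), '[]'::json) AS posts
    FROM post p
    CROSS JOIN LATERAL (
        SELECT COALESCE(json_agg(c.body), '[]'::json) AS comments
        FROM comment c
        WHERE c.post_id = p.id
    ) c
    WHERE p.author_id = a.id
) p;
```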
JSON is very helpful here, and it _happens_ [1] that our HTTP APIs tend to respond with JSON, but I feel SQL's data model is much better suited. The difference is that JSON is hierarchical but SQL relations are flat. However, if we let a relation be contained as a value (in a field/cell), then we should be able to produce our hierarchical query result in a natural way (and possibly map that through a `toJSON` method on the API server, if the client wants application/json... otherwise the server might want to do grpc or whatever, and getting JSON from the DB is a roundabout pain in the bum).
[1] Actually, causality is backwards here - Postgres supports JSON only because the RESTful API servers that frequently interact with it need to provide JSON to _their_ clients... and so it was a highly demanded feature.
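Postgres can already approximate "a relation as a value" with arrays of composite rows, converting to JSON only at the boundary; the table names below (`"order"`, order_item) are assumptions:

```sql
-- Nest child rows as an array of records (a relation in a cell),
-- staying inside SQL's type system.
SELECT o.id,
       array_agg(i) AS items          -- items has type order_item[]
FROM   "order" o
JOIN   order_item i ON i.order_id = o.id
GROUP  BY o.id;

-- Same shape, serialised only when the client asks for JSON:
SELECT o.id,
       jsonb_agg(to_jsonb(i)) AS items
FROM   "order" o
JOIN   order_item i ON i.order_id = o.id
GROUP  BY o.id;
```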
Personally I am using an architecture similar to https://sive.rs/pg in my personal projects and I am very happy with it. It is important that you put data and procedures/views in different schemas, so that you can use migrations for your data but automatically drop and recreate the procedures/views during deployment. Also use pgTAP or something similar for testing.
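A hedged sketch of that split (the schema names `data`/`api` and the pgTAP assertion are my assumptions, not taken from sive.rs/pg):

```sql
-- Migrated once, holds state:
CREATE SCHEMA data;
CREATE TABLE data.users (
    id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email text NOT NULL UNIQUE
);

-- Dropped and recreated on every deploy:
CREATE SCHEMA api;
CREATE VIEW api.users AS
    SELECT id, email FROM data.users;

CREATE FUNCTION api.user_by_email(p_email text)
RETURNS SETOF api.users
LANGUAGE sql STABLE AS $$
    SELECT * FROM api.users WHERE email = p_email;
$$;

-- pgTAP test (requires the pgtap extension; runs in a rolled-back txn):
BEGIN;
SELECT plan(1);
SELECT has_function('api', 'user_by_email', ARRAY['text']);
SELECT * FROM finish();
ROLLBACK;
```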
That's a cool approach that could work well if you don't need realtime data validation such as in a UI. I would love to find a solution that allows the same validation rules to be used in the DBMS as well as in the backend and frontend code.
I like this point of view but putting logic in the database also has downsides - the tooling in particular is bad. No debugger, hard to write tests, can't canary changes, bad or non-existent autocompletion, lack of linters, etc.
The tooling keeps getting better so I'm bullish on that side.
For canaries there's growing support for branching the database which can help.
But in the end, this, like all things, requires balance. Putting everything in there can lead to pain down the road; for example, I wouldn't follow the HTTP part.
My use case is with an app's local SQLite, and I have a lot of code transforming the returned rows into JSON. It works, but it feels slow and too divorced from the data.
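For what it's worth, SQLite's JSON1 functions can push that transformation into the query itself; a sketch with assumed table names (users, todos):

```sql
-- Build the nested JSON in SQLite instead of in app code;
-- json_object and json_group_array ship with the JSON1 extension.
SELECT json_object(
    'id',    u.id,
    'name',  u.name,
    'todos', (SELECT json_group_array(json_object(
                  'id',   t.id,
                  'text', t.text,
                  'done', t.done))
              FROM todos t
              WHERE t.user_id = u.id)
) AS user_json
FROM users u;
```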
> Time and time again, software engineers reinvent the data-access wheel out of ignorance of database capabilities.
Much of this discourse around SQL "not having structure" seems to be about arrogance rather than ignorance. It would take 10 seconds with ChatGPT to resolve this deficit, but for some reason we insist on writing entire blogs and burning hundreds of aggregate hours commenting about a make-believe world wherein views and CTEs don't exist.