I've tried this a couple of times in the past. The biggest issue I've ever run into is database modifications. In those cases, instead of merely needing to update the queries in my API server, I also needed to update my client code, and that gets much harder with multiple clients.
Of course, this can also be the case when changing the API, but deprecated fields can be faked in that portion of the application to allow existing applications to continue to work.
I think it comes down to where you're managing your database interactions. I don't care about most features in an orm, so I maintain my own that's barely anything more than a SQL generator that won't break every time I change the database.
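For concreteness, a "barely more than a SQL generator" layer might look like this. This is a hypothetical sketch, not the actual code described above: a single helper that builds a parameterized SELECT, so query strings live in one place and schema changes only touch call sites, not hand-written SQL scattered everywhere.

```python
def select(table, columns, where=None):
    """Generate a parameterized SELECT statement plus its parameter list.

    Hypothetical minimal SQL generator: no joins, no ORM features,
    just enough to keep query strings out of application code.
    """
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    params = []
    if where:
        clauses = []
        for col, val in where.items():
            clauses.append(f"{col} = ?")
            params.append(val)
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

# Usage: renaming a column means changing one call site, not many strings.
sql, params = select("users", ["id", "name"], {"active": 1})
# sql    -> "SELECT id, name FROM users WHERE active = ?"
# params -> [1]
```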
Overall, I think SQL on the client is excellent - even ideal. And as a matter of fact, I try to develop APIs that simply generate a user-specific excerpt of our database, where SQL would be perfect for querying the data locally on the client. Last year I tried running SQLite in the browser for this exact reason, but it was far too heavy.
Anyway, back to the issue - the idea of having to immediately adjust all our client software every time I make a database adjustment is a bit anxiety-inducing, especially when building fast in a drive toward market fit.
That said, with some great logging that tracks the actual queries and where they come from, it probably wouldn't be so bad. Especially if there's a means to maintain backwards compatibility.
What if the tables your API queried were actually views? You could do the same faking of data during migrations of the underlying data (structure) and the client should not be affected.
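As a minimal sketch of that idea (table and column names are made up for illustration): after a migration splits a column, a view can reconstruct the old shape so existing queries keep working unchanged.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Underlying table after a migration: the old "name" column was
# split into "first" and "last".
con.execute("CREATE TABLE users_v2 (id INTEGER PRIMARY KEY, first TEXT, last TEXT)")
con.execute("INSERT INTO users_v2 VALUES (1, 'Ada', 'Lovelace')")

# A view fakes the pre-migration schema; clients still query "users.name".
con.execute("""
    CREATE VIEW users AS
    SELECT id, first || ' ' || last AS name FROM users_v2
""")

row = con.execute("SELECT name FROM users WHERE id = 1").fetchone()
# row -> ('Ada Lovelace',)
```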
What if the "views" your API queries didn't have to be tables? You could just keep "tables" as a backend implementation detail, and 'fake' a different model on top of them that's closer to what the client needs.
I think what's becoming clear in this discussion is that there are some developers (myself definitely included) who would like to use SQL as an interface to plain programming structures outside of actual databases.
I understand postgres provides something like this, but it would be interesting as a library rather than a separate application. I'd love to be able to query hashmaps or arrays of data using SQL.
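One approximation that exists today (not the Facebook system mentioned below, which I can't confirm): SQLite's `:memory:` mode embedded as a library, with plain in-memory data loaded into a table and queried with SQL.

```python
import sqlite3

# An ordinary in-memory data structure: a list of tuples.
people = [("alice", 34), ("bob", 28), ("carol", 41)]

con = sqlite3.connect(":memory:")  # no separate database server involved
con.execute("CREATE TABLE people (name TEXT, age INTEGER)")
con.executemany("INSERT INTO people VALUES (?, ?)", people)

rows = con.execute(
    "SELECT name FROM people WHERE age > 30 ORDER BY age"
).fetchall()
# rows -> [('alice',), ('carol',)]
```

The copy into SQLite is the compromise: it's SQL over your data, but not SQL *directly* over your hashmaps and arrays, which is what a purpose-built library could offer.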
Maybe this already exists somewhere? My web-search-fu is failing me but I recall reading somewhere that facebook had something like this for querying systems, so maybe such a library exists extracted from that usecase?
> I think what's becoming clear in this discussion is that there are some developers (myself definitely included) who would like to use SQL as an interface to plain programming structures outside of actual databases.
Myself included too. As a side project, I'm currently writing a game (roguelike), where all the game state that would ordinarily be put into an ECS or an object tree is instead stored in an in-memory SQLite instance; the game's state management consists of (currently hand-written) SQL queries.
The reason is that, in trying to implement yet another version of an ECS-like architecture, I realized I was essentially reinventing database queries and database indices, so I might as well use the real thing.
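To illustrate the shape of that approach (this is a toy sketch, not the actual game's schema): components become tables keyed by entity id, indices replace hand-rolled spatial lookups, and a "system" is just a join.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Each component type is a table; an entity "has" a component
# if a row with its id exists.
con.executescript("""
    CREATE TABLE position (entity INTEGER PRIMARY KEY, x INTEGER, y INTEGER);
    CREATE TABLE health   (entity INTEGER PRIMARY KEY, hp INTEGER);
    CREATE INDEX idx_pos ON position (x, y);  -- replaces a hand-rolled spatial index
""")
con.execute("INSERT INTO position VALUES (1, 5, 5)")
con.execute("INSERT INTO health VALUES (1, 10)")
con.execute("INSERT INTO position VALUES (2, 5, 6)")  # entity 2: position, no health

# A "system" is a query: all damageable entities adjacent to (5, 5).
rows = con.execute("""
    SELECT p.entity, h.hp
    FROM position p
    JOIN health h ON h.entity = p.entity
    WHERE abs(p.x - 5) <= 1 AND abs(p.y - 5) <= 1
""").fetchall()
# rows -> [(1, 10)]   (entity 2 is excluded by the join: it has no health component)
```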
For Postgres, do you mean foreign data wrappers? Or storing plain (mmappable?) binary data in bytea and creating a pg extension to serialize/deserialize? I'm working on both...