Something weird happened to software after the 90s or so.
You had all these small-by-modern-standards teams (though sometimes in large companies) putting out desktop applications, sometimes on multiple platforms, with shitloads of features. On fairly tight schedules. To address markets that are itty-bitty by modern standards.
Now people are like “We’ll need (3x the personnel) and (2x the time) and you can forget about native, it’s webshit or else you can double those figures… for one platform. What’s that? Your TAM is only (the size of the entire home PC market circa 1995)? Oh forget about it then, you’ll never get funded”
It seems like we’ve gotten far less efficient.
I’m skeptical this problem has to do with code-writing, and so am skeptical that LLMs are going to even get us back to our former baseline.
1. Personally I find writing software for the web far more difficult/tedious than desktop. We sure settled on the lowest common denominator
1a. Perhaps part of that is that the web doesn't really afford the same level of WYSIWYG?
2. Is it perhaps more difficult (superlinear) to write one cloud SaaS product that can scale to the whole world, rather than apps for which each installation only needed to scale to one client? Oh and make sure to retain perfect separation between clients
2a. To make everything scale, it's super distributed, but having everything so distributed has a huge cost
3. Some level of DLL hell, but something different (update hell?). I barely do any programming in my free time anymore because I would end up spending almost the whole time dealing with some barrage of updates: to the IDE, to the framework, to the libraries, to the build infrastructure
3a. There's always a cost to shipping, to the development team and/or the users. With releases so frequent, that cost is paid constantly and/or unpredictably (from the development or user perspective)
3b. Is there any mental sense of completion/accomplishment anymore or just a never-ending always-accelerating treadmill?
3c. I wish I could find the source but there was some joke that said "software developers are arrogant or naïve enough to think that if you take a marathon and just break it up into smaller parts you can 'sprint' the whole way"
> To make everything scale, it's super distributed, but having everything so distributed has a huge cost
It's more than a huge cost, it's often insane... We are not talking 10x but easily 100x to 1000x. It's like when I see some well-known database makers that scale write about how they can do a million writes per second, ignoring that they rented 1,000 servers for that, each costing $500 per month. That same software is also 10x to 100x slower than a single PostgreSQL database on reads.
So you ask yourself: how many companies do a million writes per second? Few... And how many of those writes could have been eliminated by smarter caching / batching? Probably a factor of 10 to 100x...
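For what it's worth, the batching I have in mind is nothing exotic. A minimal sketch in Python (the flush_to_db callback, batch size, and delay are made-up knobs for illustration); the point is simply that N tiny writes become one request of N rows:

    import threading

    class BatchedWriter:
        """Buffers individual writes and flushes them as one batch.

        Turning N tiny write requests into one request of N rows is where
        the 10-100x reduction in write traffic usually comes from.
        """

        def __init__(self, flush_to_db, max_batch=1000, max_delay_s=0.05):
            self.flush_to_db = flush_to_db   # e.g. one multi-row INSERT (assumed callback)
            self.max_batch = max_batch
            self.max_delay_s = max_delay_s
            self.buffer = []
            self.lock = threading.Lock()
            self.timer = None

        def write(self, row):
            with self.lock:
                self.buffer.append(row)
                if len(self.buffer) >= self.max_batch:
                    self._flush_locked()
                elif self.timer is None:
                    # flush a partial batch after a short delay so writes aren't held forever
                    self.timer = threading.Timer(self.max_delay_s, self.flush)
                    self.timer.start()

        def flush(self):
            with self.lock:
                self._flush_locked()

        def _flush_locked(self):
            if self.timer is not None:
                self.timer.cancel()
                self.timer = None
            if self.buffer:
                self.flush_to_db(self.buffer)   # one round trip instead of len(buffer)
                self.buffer = []

Hook flush_to_db up to a single multi-row INSERT and the request count to the database drops by roughly your batch size.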
The thing I like about scalable solutions is that it's way easier to just add a few nodes, versus needing to deal with Postgres replication / master setup when the master node gets moved or needs to be upgraded.
For fun, I wrote my own ART database and an LSM database using LLMs... Those things do 400 to 500k inserts/second on basic cheap hardware. So wait, why are some companies advertising that they do a million inserts/second on $500k/month worth of hardware? Some companies may genuinely need that ability to scale, as they will not run 1,000 servers but maybe 10,000 or more. But 99% of companies will never even come close to 100k inserts/second, let alone a million.
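Those single-node numbers are not exotic. A quick sanity check using nothing but the stdlib (in-memory SQLite, one big transaction, so this is the friendliest possible setup and says nothing about durability):

    import sqlite3, time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")

    N = 500_000
    rows = [(i, f"value-{i}") for i in range(N)]

    start = time.perf_counter()
    with conn:  # one transaction for the whole batch
        conn.executemany("INSERT INTO kv VALUES (?, ?)", rows)
    elapsed = time.perf_counter() - start

    print(f"{N / elapsed:,.0f} inserts/second")

On ordinary hardware this typically lands in the hundreds of thousands of inserts per second, which is the point: once you strip out network hops and per-statement commits, the per-insert cost is tiny.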
People forget that network latency is a huge thing, and the moment you want consistency and need something like Raft, you're paying not just 1x the network latency per write but more like 4x (send write, verify receive, send commit, verify commit, confirm).
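Back-of-the-envelope, assuming that 4-hop model and a made-up 0.5 ms one-way latency between nodes:

    one_way_ms = 0.5          # assumed intra-datacenter latency between nodes
    hops_per_write = 4        # per the send/ack/commit/ack sequence above
    local_write_ms = 0.01     # assumed cost of the same write done in-process

    consensus_ms = hops_per_write * one_way_ms
    print(f"consensus write: {consensus_ms} ms -> {1000 / consensus_ms:.0f} sequential writes/s")
    print(f"local write:     {local_write_ms} ms -> {1000 / local_write_ms:.0f} sequential writes/s")
    # Pipelining/batching hides some of this, but the per-write floor is set by the network.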
Even something as basic as SQLite vs Postgres on the same server can mean a 3x difference in performance, simply because of the network overhead versus an in-process call. And that network overhead is purely local, on the same machine.
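You can see that local overhead with nothing but the standard library; a crude sketch, with a loopback echo server standing in for the database's wire protocol:

    import socket, threading, time

    def echo_server(sock):
        conn, _ = sock.accept()
        while data := conn.recv(64):
            conn.sendall(data)

    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    cli = socket.create_connection(srv.getsockname())
    N = 10_000
    start = time.perf_counter()
    for _ in range(N):
        cli.sendall(b"SELECT 1")   # stand-in for one tiny query
        cli.recv(64)               # wait for the "result"
    elapsed = time.perf_counter() - start
    print(f"{elapsed / N * 1e6:.1f} microseconds per round trip over loopback")

Each round trip costs on the order of tens of microseconds here (much of it Python overhead), versus nanoseconds for an in-process call, and it only gets worse once a real machine boundary is involved.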
Part of this is the huge ZIRP-driven salary bubble in the US. If good software engineers were as cheap as good structural engineers, you'd be able to stick three of them in a room for $500k a year and add a part-time manager and they'd churn out line-of-business software for some weird little niche that saves 50 skilled-employee-years per year and costs $20k a seat.
The bubble means that a) the salaries are higher, b) the total addressable market has to justify those salaries, c) everyone cargo cults the success stories, and so d) the best practices are all based on the idea that you're going to hyperscale and therefore need a bazillion microservices hooked up to multiple distributed databases that either use atomic clocks or are only eventually consistent, distributed queues and logs for everything, four separate UIs that work on web/iOS/android/desktop, an entire hadoop cluster, some kind of k8s/mesos/ECS abomination, etc.
The rest of the world, and apparently even the rest of the US, has engineering that looks a little more like this, but it's still influenced by hyperscaler best practices.
Yep. Software construction was branded a team sport. Hence, social coding, tool quality being considered more important (good thing for sure), and, arguably, less emphasis on individual skill and agency.
This was in service of a time when tech was the great equalizer, powered by ZIRP. It also dovetailed perfectly with middle managers needing more reports in fast growing tech companies. Perhaps the pendulum is swinging back from the overly collective focus we had during the 2010s.
I would also make the case that software underwent a demographic shift as demand skyrocketed and languages and tooling lowered the barriers to entering the profession.
80s/90s dev teams were more likely to be weird nerds with very high dedication to their craft. Today's devs are much more regular people, but there are a lot more of them.
Circa 2005 my boss and I would pitch that the relational database + HTML forms paradigm was a breakthrough that put custom software within reach of more customers: for one thing you could just delete all the InstallShield engineers, but also, memory safety was a big problem in the Win95 era. It wasn't so much about being hacked as about application state getting corrupted over time, so you just expected Word to crash once an hour or so.
> Something weird happened to software after the 90s or so.
Counterpoint: What might have happened is that we expect software to do a lot more than we did in the 90s, and we really don't expect our software features to be static after purchase.
I agree that we sometimes make things incredibly complex for no purpose in SE, but also think that we do a rose-colored thing where we forget how shitty things were in the 1990s.
> Counterpoint: What might have happened is that we expect software to do a lot more than we did in the 90s, and we really don't expect our software features to be static after purchase.
Outside the specific case of Apple's "magical" cross-device interoperability, I can't think of many areas where this is true. When I step outside the Apple ecosystem, stuff feels pretty much the same as it did in 2005 or so, except it's all using 5-20x the resources (and is a fully enshittified ad-filled disjointed mess of an OS in Windows' case)...
> I agree that we sometimes make things incredibly complex for no purpose in SE, but also think that we do a rose-colored thing where we forget how shitty things were in the 1990s.
... aside from that everything crashes way, way less now than in the '90s, but a ton of that's down to OS and driver improvements. Our tools are supposed to be handling most of the rest. If that improved stability is imposing high costs on development of user-facing software, something's gone very wrong.
You're right that all the instability used to be truly awful, but I'm not sure it's better now because software delivery slowed way down in general (maybe it did for operating systems and drivers)
This is exactly what I mean with the rose coloured glasses.
Categorically, I cannot think of a single current software product that existed then that I would rather be using. 90s browsers sucked, famously. 90s Photoshop is barely usable compared to modern Photoshop. Text editors and word processors are so much better (when was the last time you heard of someone losing all of their work? Now we don't even bother with frequent saving and a second floppy for safety). I can remember buying software in the 1990s that just didn't install or work, at all, despite meeting all of the minimum specs.
Seriously, go use a computer and software from the 1990s or 2000s; you are forgetting. I'm also not convinced by your assertion that software delivery has slowed down. I get weekly updates for most of my programs. Most software in the 1990s was lucky to get yearly updates...
This needs to be said more. Software used to be so much better, and so was tooling.
While it wasn't perfect, I'd argue software got much worse, and I blame SaaSification and the push for web-based centralization.
Take Docker, for example. Linux lacked a way to host server software in a standardized, isolated manner.
The kernel already had all the features (namespaces, cgroups) to make it work; all we needed was a nice userland to take advantage of them.
What was needed was a small loader program that set up the sandbox and then started executing the target software.
What we got was Docker, which somehow came with its own daemon (what about cron, systemd etc), way of doing IPC (we had that), package distribution format (why not leave stuff on the disk), weird shit (layers wtf), repository (we had that as well), and CLI (why).
All this stuff was wrapped into a nice package you have to pay monthly subscription fees for.
Duplicating and enshittifying standard system functions, what a way to go.
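To be fair to the "small loader" idea, here is roughly what it looks like; a sketch only, leaning on util-linux's unshare(8), and assuming you already have a root filesystem unpacked on disk (ROOTFS and CMD are placeholders):

    import subprocess

    ROOTFS = "/srv/containers/myapp"   # assumed: an unpacked root filesystem
    CMD = ["/bin/sh"]                  # the target software to run inside

    # New mount/UTS/IPC/PID/network namespaces, then chroot into the rootfs
    # and run the target. Needs root (or user namespaces via --map-root-user).
    subprocess.run(
        ["unshare", "--mount", "--uts", "--ipc", "--pid", "--net", "--fork",
         "chroot", ROOTFS, *CMD],
        check=True,
    )

Resource limits via cgroups and remounting /proc inside the namespaces are a few more lines; image building and distribution are the pieces this sketch deliberately leaves out.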