Mixxx is absolutely awesome. One of the most carefully organized open source projects that I've seen.
Yeah, the Emperor's New Clouds. Some time ago I thought I was crazy and everybody else was right :-) but now we are seeing more and more people doing the numbers.
You could do configuration management in the 90s with rdist well enough on 100 servers. The problem with IT in the early 2000s was that IT departments were shoveling money to companies like VMware and EMC and overbuilding all their hardware. Amazon built everything on commodity hardware and open source and didn't have vendors acting like financial boat anchors on their scalability. IT departments also usually had massive amounts of friction and gatekeeping around getting anything done, and you needed a design review meeting to spin up a single server for someone. And really I think it was the elimination of that centralized control and bureaucracy that made cloud computing so popular.
I love Martin Fowler's one-pager called Snowflake Server - it aged very well and I still use it as a reference in the cloud era. And this text is also good advice IMO.
What I feel is missing from what he calls the "Microservices Premium" is a clear statement that this premium is paid in "ops" hours. That changes the game because "ops" resources are scarce.
In fact, the microservices dilemma is an optimization problem over the ops/dev ratio that is being wrongly treated as a conceptual problem.
This is the simplest analysis I could come up with:
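One way to make the ops/dev framing concrete is a toy model where ops effort grows with the number of services. All numbers below (hours per service, team hours) are hypothetical, picked only to illustrate the shape of the trade-off:

```python
# Toy model of the "microservices premium" paid in ops hours.
# All constants are hypothetical, for illustration only.

def ops_hours_per_month(services: int,
                        base_hours: float = 40.0,
                        hours_per_service: float = 15.0) -> float:
    """Monthly ops effort: a fixed platform cost plus a per-service cost
    (deploy pipelines, monitoring, on-call, upgrades)."""
    return base_hours + services * hours_per_service

def max_services(ops_headcount: int,
                 hours_per_ops_month: float = 160.0,
                 base_hours: float = 40.0,
                 hours_per_service: float = 15.0) -> int:
    """How many services a given ops team can sustain within its hour budget."""
    budget = ops_headcount * hours_per_ops_month - base_hours
    return max(0, int(budget // hours_per_service))

# With these assumed numbers, a 2-person ops team sustains at most:
print(max_services(2))  # 18 services
```

The point is not the specific constants but that the sustainable number of services is bounded by ops capacity, so the "premium" is a budget constraint, not a matter of taste.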
Thanks for sharing the story. Even with the overall TCO being higher, I wonder how the 8K-to-6K reduction happened.
On AWS, Fargate containers are way more expensive than VMs, and non-Fargate containers are kind of pointless since you have to pay for the VMs they run on anyway. Also, auto-scaling the containers without making a mess is not trivial. Thus, I'm curious. Perhaps it's Lambda? That's a different can of worms.
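A back-of-the-envelope sketch of that cost gap, for a 2 vCPU / 8 GB workload running 24/7. The rates below are approximations of us-east-1 on-demand pricing at one point in time; check the current AWS pricing pages before drawing conclusions:

```python
# Fargate vs. plain EC2 for a steady 2 vCPU / 8 GB workload.
# Rates are illustrative approximations (assumed), not live pricing.

FARGATE_VCPU_HOUR = 0.04048   # USD per vCPU-hour (assumed)
FARGATE_GB_HOUR = 0.004445    # USD per GB of memory per hour (assumed)
M5_LARGE_HOUR = 0.096         # USD/hour for m5.large: 2 vCPU, 8 GB (assumed)

HOURS_PER_MONTH = 730

fargate_monthly = (2 * FARGATE_VCPU_HOUR + 8 * FARGATE_GB_HOUR) * HOURS_PER_MONTH
ec2_monthly = M5_LARGE_HOUR * HOURS_PER_MONTH

print(f"Fargate: ${fargate_monthly:.2f}/month")
print(f"EC2:     ${ec2_monthly:.2f}/month")
print(f"Premium: {fargate_monthly / ec2_monthly - 1:.0%}")
```

Under these assumptions Fargate carries roughly a 20% premium for an always-on workload; the math flips only when the workload is bursty enough that scale-to-zero offsets the per-hour markup.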
As mentioned, most of their workloads were on cheap VPSs before. They moved some to "scale-to-zero" solutions, reduced the bloat in the VMs, fixed some buggy IaC, and also moved some stuff to serverless. That got a decent ~20% reduction.
Also, "scalability" is multi-dimensional. I've seen, in the same company, effectively infinite scalability in one downstream system while the upstream system it depended on was manually fed by fragile human-driven processes because there was no time to fix it. And at the same time the daily ops were "brain-frying" because the processes were not automated or streamlined and the documentation was ambiguous.
So you had technical scalability in one system, but if the customer base grew quickly every other bottleneck would be revealed.
There is more to business operations than technology, it seems.
Some years ago I made a Mixxx demo video with a DIY "integrated controller". It demos Linux boot to Mixxx, touch screen, beatmatching and some modest effects:
https://www.youtube.com/watch?v=DjHvW4OsQ2Y
Mixxx devs: if you are reading this... cheers :-)