One way to measure this that isn't a moral judgement is the ecological depth of an environment. If one part of the system is destroyed (e.g. a blight on plant A), how devastating will that be to the rest of the system?
One of the hallmarks of human-engineered environments is how shallow and fragile they are. Changes, like the reintroduction of wolves, are "good" because they give us deeper and more resilient environments.
The story I've heard is that General Motors' downfall began when they stopped writing loans for other car manufacturers. They used to underwrite basically every car loan in the US.
I've never looked into how truthful it is, but it fits the idiotic/arrogant-executive trope so well that I almost don't want to discover it's false.
Oh, I see, I wasn't clear. I meant specifically a policy underwriter, as opposed to something like GMAC of old. I've updated my original comment with the necessary adjective.
Looking over GMAC's history via Wikipedia, I find they did underwrite insurance for a time and that no other US insurance company originated within an automaker. That suggests to me Tesla's operating model here is indeed about as unusual in context as I originally - well, intuited, I suppose. I think that's interesting.
No other automaker before them (to my knowledge) has built its own grid of "gas stations" (the Supercharger network), no automaker went to the lengths Tesla did to vertically integrate, and no automaker completely ignored the "classic" model of independent dealerships to sell and service their cars... it makes sense that they're about the first ones (at least in America) to change how the financial side works as well.
Oh, good grief, lots of companies have done things like this before. What do you think it took to drive adoption of automobiles the first time, back before the infrastructure to support them was ubiquitously available? Have you never wondered why the premier award for quality in classical European cuisine shares its name with a brand of tire? On that side of things they're working out of a playbook older than you and me put together.
Lots of people have been "innovating finance" lately, too, not just Tesla. How has that been working out? I hear it's going great... But we were really talking about insurance in any case, the financialization of which has also, I believe, gone quite poorly, well within living memory. Are you of age to recall the name "AIG"? 'I thought not... it's not a story the Sith would tell you.'
I'll throw my hat in the ring for "disagree". Few kinds of work have such clear and unambiguous results as optimization work (it's now X% faster!). Few kinds of work have such incredible and detailed tools to guide your hand in finding where to invest your effort (look at this hot loop!).
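To make that concrete, here's a hedged sketch (the names and workload are invented for illustration) of how little it takes to get a profiler pointing at the hot loop, using Python's cProfile:

    import cProfile
    import pstats

    def hot_loop(n):                      # stand-in for the expensive inner loop
        total = 0
        for i in range(n):
            total += i * i
        return total

    def workload():                       # hypothetical overall job
        return sum(hot_loop(10_000) for _ in range(1_000))

    profiler = cProfile.Profile()
    profiler.runcall(workload)
    # The top of this listing is the "look at this hot loop!" moment.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)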
The fact that optimization work is sometimes tricky, or requires some pre-thinking, or is even *gasp* counter-intuitive, is such a hilarious way to say "this is hard". That's just the table-stakes starting point for so, so, so much work.
Edit: Heck, even deciding whether to prioritize optimization work or feature work is usually a harder problem than the actual optimization work itself.
Sometimes the vibes are wrong, and things go haywire. This is why zero-tolerance policies have to be instituted in schools. That doesn't mean the general idea is wrong. Strict adherence to written law will always fail justice. The world is too nuanced and too fractal to handle every edge case well.
I seem to recall that zero-tolerance policies caused the expulsion of multiple Boy Scouts who were merely obeying the requirement that they always carry a multifunction pocket tool (such as a Victorinox knife), and that these expulsions caused the Scouts to rescind the rule.
"Zero tolerance" policies are a generally a tool of the Sith.
I suppose it depends on your message volume. To me, processing 100k messages and then getting a page however long later as the broker (or whatever) falls apart sounds much worse than head-of-line blocking and seeing the problem directly in my consumer. If I need to avoid head-of-line blocking, I can build whatever failsafe mechanism I need for the problematic data and defer to some other queueing system (typically, just add an attempt counter and replay the message to the same kafka topic, and if attempts > X, send it off to wherever).
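For what it's worth, a minimal sketch of that replay-with-attempt-counter pattern using the kafka-python client (the topic names, MAX_ATTEMPTS budget, and process() stub are all invented for illustration):

    from kafka import KafkaConsumer, KafkaProducer

    MAX_ATTEMPTS = 5  # hypothetical retry budget

    consumer = KafkaConsumer("work", bootstrap_servers="localhost:9092",
                             group_id="workers")
    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    def process(payload):
        pass  # your actual work goes here

    def attempts_of(record):
        # Headers come back as (key, bytes) pairs; default to 0 if absent.
        return next((int(v) for k, v in (record.headers or []) if k == "attempts"), 0)

    for record in consumer:
        try:
            process(record.value)
        except Exception:
            n = attempts_of(record) + 1
            # Replay to the same topic with an incremented counter; once the
            # budget is spent, park it elsewhere (a dead-letter topic here).
            target = "work" if n <= MAX_ATTEMPTS else "work.dead-letter"
            producer.send(target, value=record.value, key=record.key,
                          headers=[("attempts", str(n).encode())])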
I'd rather debug a worker problem than an infra scaling problem every day of the week and twice on Sundays.
It's interesting you say that, since this turned an infra scaling problem into a worker problem for us. Previously, we would get terrible head-of-line throughput issues, so we would use an egregious number of partitions to try to alleviate that. Lots of partitions is hard to manage since resizing topics is operationally tedious and it puts a lot of strain on brokers. But no matter how many partitions you have, the head-of-line still blocks. Even cases where certain keys had slightly slower throughput would clog up the whole partition with normal consumers.
The parallel consumer nearly entirely solved this problem. Only the most egregious cases where keys were ~3000 times slower than other keys would cause an issue, and then you could solve it by disabling that key for a while.
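For anyone unfamiliar with the idea: this isn't the actual Confluent parallel-consumer API (that's a Java library), just a toy sketch of the concept, where records fan out to per-key lanes so a slow key only delays itself:

    import queue
    import threading

    NUM_LANES = 8  # hypothetical concurrency within a single partition

    lanes = [queue.Queue() for _ in range(NUM_LANES)]

    def handle(key, value):
        pass  # placeholder for real processing; a slow key only delays its own lane

    def worker(lane):
        while True:
            key, value = lane.get()
            handle(key, value)
            lane.task_done()

    for lane in lanes:
        threading.Thread(target=worker, args=(lane,), daemon=True).start()

    def dispatch(record):
        # Same key -> same lane, so per-key ordering is preserved while
        # different keys make progress in parallel within one partition.
        lanes[hash(record.key) % NUM_LANES].put((record.key, record.value))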
Yeah, I'd say kafka is not a great technology if your medians and 99ths (or 999ths if volume is large enough) are wildly different, which sounds like your situation. I use kafka in contexts where 99ths going awry usually aren't key-dependent, so I don't have the issues you see.
I tend to prefer other queueing mechanisms in those cases, although I still work hard to make 99ths and medians align as it can still cause issues (especially for monitoring)
Follow on: If you're using kafka to publish messages to multiple consumers, this is even worse as now you're infecting every consumer with data processing issues from every other consumer. Bad juju
> the goal should be to ensure that anyone who wants to do a thing can, with as few third party requirements as possible.
This is a good starting point, but if you have no barriers at all then you get abuse problems, which is why email is terrible. I remember being horrified in the 90s by attempts to charge 1 cent per email. Now I long for a world where that actually happened.
Ironically, the amount of effort I expend dealing with spam from the postal service is much larger than the amount of effort I expend dealing with email.
I’m pretty sure I spend less than 5 minutes a week dealing with physical spam mail. I have a recycling bin right next to where the mail arrives and most days are 15 seconds of “these go into that bin unopened” and sometimes I have to open an envelope and glance at it to see if it’s something relevant to me.
Even with the best spam filtering on email, I’m well over 5 minutes a week of distraction from it.
And now imagine how easy dealing with email spam would be if the marginal fiscal cost were not 0, as it isn't for physical spam. At 1 cent per message, a ten-million-message spam run costs $100,000. We'd have all the technology and tools we already have, plus less than 1% of the viable spam surface area.
The past decade is a difficult framing to ask the question in. Notable breakthrough results are usually understood in hindsight and a decade just isn't a lot of time for that context and understanding to develop. Science also doesn't necessarily develop in this way with consistent progress every X timespan. Usually you get lots and lots of breakthroughs all at once as an important paradigm is shattered and a new one is installed. Then observations with tiny differences slowly pile up and a very blurry/messy picture of the problems with the new paradigm takes shape. But none of those things feels like a breakthrough, especially to a layman.
That said: I'll submit the first detection of gravitational waves, from two black holes merging, in 2015 as meeting the bar of "notable breakthrough in the last decade".
Ignoring inexperience/incompetence as a reason (which, admittedly, is a likely root cause), domain fuzziness is often a good explanation here. If you aren't extremely familiar with a domain and don't know the shape of the solution you need a priori, all those levels of indirection allow you to keep lots of work "online" while replacing, refactoring, or experimenting with a particular layer. The intent should be to "find" the right shape with all the indirection in place and then rewrite to a single correct shape without all the indirection. Of course, the rewrite never actually happens =)
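A minimal sketch of what that looks like while the shape is still being found, with invented names: the Protocol is the seam that keeps callers online while one layer gets swapped out underneath them.

    from typing import Protocol

    class Pricer(Protocol):
        def price(self, sku: str) -> float: ...

    class LegacyPricer:
        def price(self, sku: str) -> float:
            return 9.99                       # the shape we know works today

    class ExperimentalPricer:
        def price(self, sku: str) -> float:
            return 9.99 * 0.9                 # the shape we're still experimenting with

    def checkout(pricer: Pricer, sku: str) -> float:
        # Callers only see the seam, so either implementation can be live.
        return pricer.price(sku)

    print(checkout(LegacyPricer(), "sku-123"))
    print(checkout(ExperimentalPricer(), "sku-123"))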