We’re a startup focused on easy horizontal scalability early on. From a management point of view, premature optimizations are cost intensive, especially while we’re still figuring out what our product is and how it works. In the end, functionality and implementation change all the time, and early optimizations get invalidated very quickly. We’d rather purchase new servers.

A powerful server costs 100 USD/month, and the development team can focus on implementing features and functionality that move our product forward; optimizations are opportunity losses at the beginning of a product. Of course, as soon as bad performance impacts user experience it’s a different story, and likewise once you have hundreds or thousands of servers. But by then you’ll know what your product is doing, changes become smaller and more focused, and code optimizations start to make sense.
That's not an argument not to optimize; it's an argument to consider the costs & implications of optimization. Failing to do that now can cost much more later on, when compute resource needs balloon and the product scope/codebase is much larger, making optimizations more labor intensive.
There's no yes/no on this. It's all on a slider, a spectrum.
Optimizations that are not premature are a win, though, and by making it standard to consider performance issues from the outset, you and your team get better at it, to the point that it doesn't cost extra time to default to pretty efficient solutions.
Also, some basic tech choices can often get you large constant-factor wins without any downsides, like using a language that is at least JIT-compiled rather than interpreted.
It is a tautology, because in this context the word "premature" means "before you have to". It's like saying "You don't need to do things before you need to do them." It's true by definition.
The question is: which optimizations are premature and which aren't?
I'd say it's not only a tautology, it's plain false.
Cheap optimizations that are known to work (compiling in release mode, enabling HTTP cache in your web server, using a fast sorting algorithm on large arrays...) are good even if you don't need them right now.
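To put a rough number on the sorting example, here's a toy Python sketch (sizes and data are made up, purely illustrative) comparing a naive quadratic sort against the built-in one:

    import random
    import time

    data = [random.random() for _ in range(10_000)]

    # Naive quadratic selection sort: the kind of thing that silently
    # eats seconds once arrays get large.
    def selection_sort(xs):
        xs = list(xs)
        for i in range(len(xs)):
            j = min(range(i, len(xs)), key=xs.__getitem__)
            xs[i], xs[j] = xs[j], xs[i]
        return xs

    t0 = time.perf_counter()
    selection_sort(data)
    t1 = time.perf_counter()
    sorted(data)  # built-in Timsort: O(n log n), implemented in C
    t2 = time.perf_counter()

    print(f"selection sort: {t1 - t0:.2f}s, built-in sorted: {t2 - t1:.4f}s")

Swapping in the built-in sort costs nothing to write, which is exactly why it doesn't count as premature.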
But the question is still valid: which are premature and which aren't? For experts, avoiding N+1 queries, using a hash map rather than a loop match, caching in memory, and using smaller data types are all standard optimization techniques (the hash-map one is sketched below).
Now if a junior doesn't implement them, did they fail to do a required optimization? And if they did implement them, was that premature optimization?
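For what it's worth, the hash-map-over-loop-match case is nearly free in most languages; a minimal Python sketch with hypothetical user records:

    # Hypothetical data: 100k user records.
    users = [{"id": i, "name": f"user{i}"} for i in range(100_000)]

    # Loop match: O(n) per lookup; fine once, painful inside another loop.
    def find_user(user_id):
        for u in users:
            if u["id"] == user_id:
                return u
        return None

    # Hash map: one O(n) pass to build, then O(1) per lookup.
    users_by_id = {u["id"]: u for u in users}

    assert find_user(99_999) == users_by_id.get(99_999)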
"premature" is in the eye of the beholder... (or implementer). I've cleaned up a lot of slow code over the last few years, and in most cases the 'fixes' are pretty simple. The pushback was typically along the lines of "YAGNI" and "premature optimization is the root of all evil" quotes, yet... yeah, you know what, we ARE "gonna need it", and adding a table index on a column you're querying on in a where clause isn't "premature". It's industry standard practice.
I think what the others are trying to say is that 'premature' is generally a label applied in hindsight.
Only in hindsight do you know which preparations turned out to make your life easier, which things you regret not doing, and which ended up being a total waste of time.
What about the cost of putting shitty, slow software into the world?
How about having respect for your users' time? Depending on how many users you have, shaving seconds or milliseconds off your response time will save humanity hundreds, thousands, or millions of hours spent waiting for your software to do something.
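A back-of-the-envelope calculation (with entirely made-up numbers) shows how fast this adds up:

    # Hypothetical numbers, purely for illustration.
    users = 1_000_000
    requests_per_user_per_day = 10
    ms_saved_per_request = 200

    seconds_saved = users * requests_per_user_per_day * ms_saved_per_request / 1000
    print(f"{seconds_saved / 3600:,.0f} human-hours saved per day")  # about 556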
It depends on the product and the user/customer impact: if it doesn't matter whether it takes 200ms or 300ms, why do you care? And if it does matter and you can scale horizontally to improve performance, do that first instead of spending valuable developer hours.
I think it's a difference between the hacker/product mindset and the engineer mindset.
For hackers or product-focused devs, making something work is the most important aspect, whereas for engineering-focused devs it hurts to see large inefficiencies that are solvable.
I empathize with the engineer mindset, but definitely align more with the hacker/product mindset.
After your first sentence I thought you'd continue like this:
For a hacker, making something fast and cool is the most important aspect, whereas for engineers, making careful tradeoffs between effort and customer impact is the main focus.
I think you could swap 'hacker' and 'engineer' in your sentence and it would still make sense.
I'm not saying "engineer bad, hacker good", just that we tend to value "good" code, architecture, performance, etc. highly, and sometimes that is to our detriment.
I feel the latter is just not being able to see the big picture.
People in the latter mindset don't get that providing real value to users is what matters.
Literally everything boils down to it. Even the example of making a faster application: it's only useful in that it increases how quickly you generate user value.
At some point once you generate enough value for your end user, the utility will outweigh a given latency problem.
Even making your server costs cheaper with optimization really matters because you can transform the savings into user value that exceeds the cost of optimizing.
So it really doesn't make sense to thumb your nose at people who are "just sticking stuff they don't understand together" in a vacuum, like they tend to do.
You don't know their runway.
You don't know how concrete their business case is.
You don't know their development budget.
The moment you're making a dichotomy between yourself and "those programmers who just put shiny legos they don't understand together", you're demonstrating a lack of understanding of the bigger picture that development fits in.
Because sometimes hiring someone who has little experience outside clicking those legos together is all that allows an idea to exist.
tl;dr: A service that loads with 100 requests instead of 1 because the developer doesn't know better still generates more value for the end user than one that doesn't exist.
The even bigger picture is that we (as people, who all use various services in our daily life) end up with all these services being slow and frustrating, although they do provide value, resulting in an overall frustrating daily experience.
The non-existence of a bad service allows a better service to come into existence. All too often a market is dominated by some company that got lucky and now has no incentive to improve.
I happen to have it on good authority that there is an endpoint lurking somewhere in our API that has a p99 latency of almost 3 full hours per request. We clearly don't respect our users' time. Or our own. Or basic software engineering practices. Sigh.
Yeah, clients will time out for sure after a few minutes. The API doesn't know that though... it'll just sit there wasting an ungodly amount of resources and causing tonnes of noisy-neighbor problems. You look in the APM traces and see one with 8 million spans that enumerates the whole database. Just silly. It's easy to be a 10x engineer in an environment where there is a lot of 0.1x engineering :(
Yeah, I think the article makes a good point on the impact that optimization can make but I use the same strategy that you do at my current job. Early on it just makes a lot more sense to focus on what can generate revenue.
Depending on your field, making things fast may be the difference between not working at all, merely working, and being adopted by other practitioners.
Completely right: that is what you should be focusing on. Only very large systems will see a return on investment from optimizing for hardware costs (bad code and crummy implementations should be worked out as tech debt). Focus on product, features, and serving your customers. Get your market fit and burn your investor capital. Then, when you reach that next stage, either sell to a company that is good at management or start hiring for performance.
On the other end of the spectrum, really poorly performing code can affect development, testing, and delivery times, which eats developer time as well. Some developers or testers might sit there "oh well"-ing while they wait for a 1-hour job to finish, when 1 day of development time could turn that job into a 5-minute run. That kind of optimization pays for itself before it even makes it to production.
It's not necessarily either/or. In my experience, the really crappy developers who can't write optimized code are also the ones who are really slow and expensive in developing even the most basic features.
Indeed, that's basically the point as well. I have seen developers hunting for the "right" way and the best performance, down to the microsecond, for no good business reason, while ignoring the state of the project/product and the value they were supposed to be focusing on.