Did not know of the "thinkism" expression. When I was studying at an engineering school in France, I called that "le mythe du cerveau" (literally "the brain myth", though it does not roll off the tongue as well).
It is a guaranteed failure mode of large orgs. Curious to hear about more references on how to fight this at an organization level, besides the one given in the OT.
The main point of The Mythical Man-Month was that communication cost across people becomes the dominant cost as projects grow in complexity.
So increasing individual output by itself is not enough to affect the argument. It could, if you also reduce the number of people needed for a project, where "people" means everyone involved in the project, not just SWEs. But in large orgs there are strong forces pulling toward larger project sizes: budgeting overhead and other "large orgs optimize for legibility" kinds of arguments.
IMO the only way this will change is when new companies challenge the existing big guys. I think AI will help achieve this (e.g. agentic e-commerce challenging the existing players), but it will take time.
Indeed. I would add a third factor to compute and datasets: the lego-like aspect of NNs that enabled scalable OSS DL frameworks.
I did some ML in the mid-2000s, and it was a PITA to reuse other people's code (when it was available at all). You had some well-known libraries for SVMs; for HMMs you had to use HTK, which had a weird license; and otherwise reproducing experiments required you to reimplement things yourself.
The late 2000s had a lot of practical innovation that democratized ML: Theano and then TF/Keras/PyTorch for DL, scikit-learn for classical ML, etc. That ended up being important because you need a lot of tricks to make this work on top of the "textbook" implementation. E.g. if you implement the EM algorithm for GMMs, you need to do it in log space to avoid underflow; same for DL (Glorot & co. initialization, etc.).
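To make the underflow point concrete, here is a minimal sketch (NumPy only, with a made-up 1-D two-component mixture) of the E-step of EM for a GMM: the naive version multiplies raw densities and underflows to zero for points far in the tails, while the log-space version with the log-sum-exp trick stays well-defined.

```python
import numpy as np

def log_normal_pdf(x, mu, sd):
    """Log-density of N(mu, sd^2) at x (elementwise over components)."""
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd) - 0.5 * np.log(2 * np.pi)

# Hypothetical 1-D GMM with two components and equal weights.
means = np.array([0.0, 5.0])
stds = np.array([1.0, 1.0])
log_weights = np.log(np.array([0.5, 0.5]))
x = 60.0  # far in both tails: densities around 1e-780 and 1e-660

# Naive E-step: exponentiating tiny log-densities underflows to 0.0,
# so the responsibilities dens / dens.sum() become 0/0 = nan.
dens = np.exp(log_weights) * np.exp(log_normal_pdf(x, means, stds))
# dens.sum() == 0.0 here

# Log-space E-step: keep everything as log-densities and normalize
# with the log-sum-exp trick (subtract the max before exponentiating,
# so the largest term is exp(0) = 1 and nothing underflows to all-zero).
log_p = log_weights + log_normal_pdf(x, means, stds)
m = log_p.max()
log_total = m + np.log(np.sum(np.exp(log_p - m)))  # stable log of the mixture density
resp = np.exp(log_p - log_total)  # responsibilities; sum to 1
```

This is exactly the kind of trick that textbook pseudocode omits and that libraries like scikit-learn bake in for you.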
I think your post may have more acronyms than any other post I have ever read on hn. Do you have a guide to which specific things you are talking about with each acronym? Deep Learning and Machine Learning are obvious but some of the others I can’t follow at all - they could be so many different things.
I agree. It is difficult to convince leadership to do this work at all ("it works on my example, ship it"), and in my experience most DS don't even want to do it.
One of the key values is that it forces some thinking about what task you want to solve in the first place. In many cases that task is difficult if not impossible to define, which implies the underlying product should not be built at all. But nobody wants to hear that.
Doing eval only makes sense if making the product better impacts something the business cares about, which is very difficult to do in practice.
The typical solution is to work at one of the "global" (aka American) companies in Japan: Google, Amazon, Apple, MS, etc. At least for now there are enough jobs across all those companies for motivated foreigners, though that could change.
My rule of thumb is that management complexity is given by #direct reports × #projects, where a project is defined by its set of stakeholders (PM, etc., depending on the business).
Concretely, managing 12 ICs on a well-defined platform team with a single PM is much easier than managing 6 people working across 6 businesses, as is more common when managing a team of data scientists.
I can believe it is deliberate at the top; I've certainly seen it first hand in several orgs I've worked at.
My sense is that unless it is actively managed against, any org big enough to have a finance department and financial planning will operate under the assumption of fungibility.
You had to accept some license terms before you could download the VST SDK. When Linux audio started to get "serious" 20 years ago, this was a commonly discussed pain point.
Concretely, it made distributing OSS VST plugins a pain, especially for Linux distributions, which generally want to build their packages from source.
Note that this was the VST2 era. VST3 was dual-licensed under a commercial license or GPLv3, which was an improvement, but only slightly: it excluded open-source software released under GPLv2, and MIT/BSD/whatever-licensed software couldn't use it either (without effectively turning the whole program into GPL-licensed software).