In the many darker timelines that one can extrapolate, capturing essential tech stacks is just a precursor to capturing hiring.
Once we start seeing OpenAI and Anthropic getting into certifications and testing, they'll quickly become the gold standard. They won't even need to actually test anyone. People will simply consent to having their chat interactions analyzed.
The models collect more information about us than we could ever imagine because, definitionally, those features are unknown unknowns for humans. For ML, the gaps in our thinking carry far richer information about us than our actual vocabularies, topics of interest, or stylometric idiosyncrasies.
As if there will be hiring in the fullness of time.
There will come a day when you can will an entire business into existence at the press of a button. Maybe it has one or two people overseeing the business logic to make sure it doesn't go off the rails, but the point is that this is a 100x reduction in labor and a 100,000x speed up in terms of delivery.
They'll price this as a $1M button press.
Suddenly, labor capital cannot participate in the market anymore. Only financial capital can.
Suddenly, software startups are no longer viable.
This is coming.
The means of production are becoming privatized capital outlays, just like the railroads. And we will never own again.
There is nothing that says our careers must remain viable. There is nothing that says our output can remain competitive, attractive, or in demand. These are not laws.
Knowledge work may be a thing of the past in ten years' time. And the capital owners and hyperscalers will be the entirety of the market.
If we do not own these systems (and at this point is it even possible for open source to catch up?), we are fundamentally screwed.
I strongly believe that people who don't see this - who downplay this - are looking the other way while the asteroid approaches.
There could be opportunities we haven't anticipated.
What if labor organizes around human work and consumers are willing to pay the premium?
At that point, it's an arms race against the SotA models: deepening the resolution and hardening the security mechanisms for capturing the human-affirming signals produced during work, and lowering the friction around verification.
In that timeline, workers would have to wear devices to monitor their GSR and record themselves on video to track their PPG. Inconvenient, and ultimately probably doomed, but it could extend or renew the horizon for certain kinds of knowledge work.
Oh are compilers going away? Or personal computers for that matter?
If the barrier to entry for button-pressed companies rises that high, the cost to run and consume the product rises too, making hand-rolled products comparatively cheaper.
Slower to roll things out? Sure.
That's the precarious balance these LLM providers have to strike. They can't just move on without the people feeding the machine data and value. The machine is not perpetual.
In the words of Mark Fisher (in the words of Zizek?): it seems that it's easier to imagine the end of the world than it is to imagine the end of capitalism.
Location: Royal Palm Beach, Florida
Remote: Yes
Willing to relocate: No
Technologies: TypeScript, React, Node.js, PHP, SQL, Redis
Résumé/CV: Available upon request
Email: dietrich [dot] stein {at-symbol} gmail [dot] com
Hi, I'm Dietrich. I'm building a product or two, but I'm also open to freelance opportunities. I like to build solutions using a little bit of everything. I also love a challenge.
Please see my X/Twitter feed this year if you're curious. I might have posted 20 screenshots of different tools and toys I've built recently:
With AI, I practice disciplined task decomposition, with strategy and plan documents for every phase. And, I deeply question each plan proposal and review every change manually when I'm not tackling something like a nasty segfault.
Examples:
- Lasgun: My fully self-hosting TypeScript (subset) compiler that emits x86-64 ELF binaries, with a deliberately Turing-incomplete, JSON-capable IR
- DeepMojo CLI: My personal fork of Gemini CLI that helps me transition between local and remote inference
- Dis: My isomorphic Redis-clone in TypeScript, taking this into production soon
> There is every reason to believe that those who invest in deep understanding will continue to be valuable, regardless of what tools emerge.
I don't take issue with this, except that it's false comfort when you consider that demand will naturally ebb and individual workload will naturally escalate. In that light, I find it downright dishonest, because the rewards for attaining deep knowledge will continue to evaporate, necessitating AI assistance.
The reason it is different this time around is that the capabilities of LLMs have incentivized the professional class to betray the institutions that enabled their specializations. I am talking about the amazing minds at Adobe, Figma, and the FAANGs who are bridging agentic reasoners and diffusion models with the domain-specific needs of their respective professional users.
Humans are a class of beings, and the humans accelerating the advance of AI in creative tools are the reason that things are different this time. We have class traitors among us, and they're "just doing their jobs". For most, willful disbelief isn't even a factor. They think they're helping while each PR just brings them closer to unemployment.
Most of these "class traitors" live in high cost of living areas, and for them, the choice is "become unemployed within two weeks for not complying", or "become unemployed within a few years for complying". They are being betrayed by the shareholder class, and they in turn are betraying their customers and their species.
The only thing that we can do is to not make it worth their time in the long run. Don't let greed and fear slide. Don't hate someone for choosing their family and comfort over your own, hate the system that forces them to make that choice. Hold them accountable, but attack the system, instead of its hostages and victims.
The level of compliance and enthusiasm varies. Some believe they are making the world a better place. Some feel they're adding value but suspect they are trapped within a cycle they refuse to examine. Some are more connected to the truth, and comply willingly but resentfully.
Where you fall depends on where you work and what you work on.
You make great points about the chain of accountability. But, in my opinion, working professionals are the only agents in the system with the potential to realize their own culpability and divert their actions.
Perhaps, it isn't fair to point to them and call them traitors. Still, they are the only ones with enough agency to potentially organize and collectively push for the kind of ethics that could save us all.
Bridging software with domain-specific needs of its professional users is nothing new: that is how domain-specific professional software gets built. What is new is that the people doing this are being referred to hysterically as "class traitors", when the improvements they're working on will bring massive and widely available benefits to professionals the world over.
While the desire is not new, advancements in LLMs and diffusion models have made this sort of bridging effective and attractive to an unprecedented degree.
Those massively and widely available benefits will continue to deflate the value of human intelligence until even most of the innovators currently working on them lose their seats at the table too.
I was 7 in 1987, learned LOGO and C64 BASIC that year, and I relate to this article as well.
It feels as though a window is closing on the feeling that software can be a powerful voice for the true needs of humanity. Those of us who can sense the deepest problems and implications well in advance are already rare. We are no more immune to the atrophy of forgetting than anyone else.
But there is a third option beyond embrace or self-extinguish. The author even uses the word, implying that consumers wanted computers to be nothing more than an appliance.
The third option is to follow in the steps of fiction, the Butlerians of Dune, to transform general computation into bounded execution. We can go back to the metal and create a new kind of computer; one that does have a kind of permanence.
From that foundation, we can build a new kind of software, one that forces users to treat the machine as appliance.
It has never been done. Maybe it won't even work. But, I need to know. It feels meaningful and it has me writing my first compiler after 39 years of software development. It feels like fighting back.
This proposal feels really vague to me, I don't really understand what this actually does. Can you explain more? What exactly is a computer with permanence? What is software that forces a user to treat the computer it runs on "as an appliance"? In what ways is this different from any general-purpose computer, and what's the reason why a user would pick this over something standard?
I mean "permanence" in the same vague sense that I think the OP was hinting at. A belief that regardless of change, the primitives remain. This is about having total confidence that abstractions haven't removed you from the light-cone of comprehension.
Re: Appliance
I believe Turing-completeness is overpowered, and the reason that AGI/ASI is a threat at all. My hypothesis is that we can build a machine that delivers most of the same experiences that existing software can. By constraint, some tasks would be impossible and others just too hard to scale. By analogy, even a Swiss Army knife is like an appliance in that it only has a limited number of potential uses.
Re: Users
The machine I'm proposing is basically just eBPF for rich applications. It will have relevance for medical, aviation, and AI research. I suppose end-users won't be looking for it until the bad times really start ramping up. But I suppose we'll need to port Doom over to it before we can know for sure.
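To make the eBPF comparison concrete, here's a toy sketch (my own illustration, not the actual machine's design) of fuel-bounded evaluation: every program gets a fixed step budget and the only loop form carries a static count, so termination is guaranteed by construction rather than by hoping a `while` condition eventually flips.

```typescript
// Toy fuel-bounded evaluator. Termination is guaranteed because every
// step consumes fuel and evaluation aborts when the budget is exhausted.
// Illustrative sketch only; not the actual substrate described above.
type Expr =
  | { kind: "num"; value: number }
  | { kind: "add"; left: Expr; right: Expr }
  | { kind: "repeat"; times: number; body: Expr }; // bound is static data, not a runtime condition

function evalBounded(e: Expr, fuel: { steps: number }): number {
  if (fuel.steps-- <= 0) throw new Error("fuel exhausted");
  switch (e.kind) {
    case "num":
      return e.value;
    case "add":
      return evalBounded(e.left, fuel) + evalBounded(e.right, fuel);
    case "repeat": {
      // The iteration count is fixed in the program text, eBPF-style:
      // an unbounded while-loop simply cannot be expressed.
      let acc = 0;
      for (let i = 0; i < e.times; i++) acc += evalBounded(e.body, fuel);
      return acc;
    }
  }
}

// 3 repetitions of (1 + 2) = 9, well within a 100-step budget.
const result = evalBounded(
  {
    kind: "repeat",
    times: 3,
    body: { kind: "add", left: { kind: "num", value: 1 }, right: { kind: "num", value: 2 } },
  },
  { steps: 100 }
);
```

Note how the restriction does the work: the evaluator never needs a timeout or watchdog, because the language itself cannot describe a non-terminating program within its fuel budget.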
> We can go back to the metal and create a new kind of computer; one that does have a kind of permanence.
It's kind of strange to think about, but I guess now there's a new incentive to do something truly new and innovative. The LLMs won't be able to do it for you.
My goal isn't to make LLM assistance impossible; it will still be possible. In fact, GPT-2-level inference is one of the launch demos I have planned, if I can finish this cursed self-hosting run.
My goal is to make training (especially self-training) impossible; while making inference deterministic by design and highly interpretable.
The idea is to build a sanctuary substrate where humans are the only beneficiaries of all possible technical advancements.
Thanks. The hardest part has been slogging through the segfaults and documenting all the unprincipled things I've had to add. Post-bootstrap, I have to undo it all, because my IR is a semantically rich JSON format that is Turing-incomplete by design. I'm building a substrate for rich applications over bounded computation, like eBPF but for applications and inference.
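For a sense of what a Turing-incomplete, JSON-serializable IR could look like (a hypothetical shape of my own; the real Lasgun IR surely differs), the key property is that loop bounds are literals in the program data, so a verifier can compute a worst-case step count before anything runs:

```typescript
// Hypothetical sketch of a Turing-incomplete, JSON-capable IR: the only
// loop form carries its bound as a literal, never a register, so every
// program's worst-case cost is statically computable, eBPF-style.
type Instr =
  | { op: "const"; dest: string; value: number }
  | { op: "add"; dest: string; a: string; b: string }
  | { op: "loop"; count: number; body: Instr[] }; // count is plain data

// A verifier can bound execution up front by walking the tree.
function maxSteps(program: Instr[]): number {
  let total = 0;
  for (const instr of program) {
    total += instr.op === "loop" ? 1 + instr.count * maxSteps(instr.body) : 1;
  }
  return total;
}

// Because the IR is plain JSON, it round-trips through serialization losslessly.
const program: Instr[] = [
  { op: "const", dest: "x", value: 1 },
  { op: "loop", count: 4, body: [{ op: "add", dest: "x", a: "x", b: "x" }] },
];
const bound = maxSteps(JSON.parse(JSON.stringify(program)));
// bound = 1 + (1 + 4 * 1) = 6 steps, known before execution
```

The JSON round-trip is the point of the "semantically rich JSON" claim: the program is ordinary data that any tool can inspect, diff, or verify without executing it.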
I never appreciated Tailwind until AI models revealed it as such a token-efficient way to transport styles between models and other use-cases. AI arguably hurts demand for their premium offering the same way it hurts demand for junior devs.
I'll sometimes ask Claude Sonnet 4.5 for JS and TS library recommendations, not for the "latest" or "most popular". Even so, it seems to love recommending promising-looking code from repos released two months ago with like 63 stars.
Seems like the opinion of someone who doesn't know that OpenAI cloned Anthropic's innovations of artifacts and computer use with their "canvas" and "operator".
Those are applied-ML-level advancements; OpenAI has pushed model-level advancements. xAI has never really done much, it seems, except download the latest papers and reproduce them.
Don't forget that OpenAI was also following Anthropic's lead at the model level with o1. They may have been first with single-shot CoT and native tokens, but advancements from the product side matter, and OpenAI has not been as original there as some would like to believe.
I suppose it is time to finally apply to YC for DeepMojo. With over 40 packages in our monorepo, after a year of ramping up, we recently launched on Windows and achieved macOS support internally. Secure, local-first, zero-trust AI, with opted-out defaults, no user telemetry, no ads, and no internet required.