I was bored so I did the math and you are not correct. Even if you don't care about the people themselves, a normal citizen in an industrialized society like Israel has about 40 years of working life. Let's assume for simplicity that some rockets would hit children and others would hit retired people, on average hitting people when they're halfway through their career, with 20 years of productive work left.
According to Wikipedia [1], Israel has an average GDP per capita of about 60 USD per hour worked, which at 40 hours per week, 50 weeks worked per year, over 20 years comes to about 40,000 hours of work and ~2.4 million USD of GDP generated. At an income tax of about 30% [2], that means an income for the state of roughly 720k USD equivalent. If the person dies in a rocket attack, the state misses out on that. Iron Dome interceptors are quite cheap compared to that, and the laser intercepts should be an order of magnitude cheaper still.
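For anyone who wants to sanity-check the arithmetic, here is the same back-of-envelope calculation as a tiny Gleam program (purely illustrative; the 40 h/week, 50 weeks/year, 20 years, 60 USD/h and 30% figures are the rough assumptions from above, not real data):

import gleam/int
import gleam/io

pub fn main() {
  // Rough assumptions, same as above: 40 h/week, 50 weeks/year, 20 working years left.
  let hours = 40 * 50 * 20           // = 40_000 hours of work
  let gdp_usd = hours * 60           // ~60 USD of GDP per hour worked = 2_400_000 USD
  let tax_usd = gdp_usd * 30 / 100   // ~30% income tax = 720_000 USD
  io.println("GDP at stake: " <> int.to_string(gdp_usd) <> " USD")
  io.println("Tax revenue at stake: " <> int.to_string(tax_usd) <> " USD")
}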
This doesn't even take into account the sunk costs that industrialized nations incur by every citizen having to attend school for about the first two decades of their lives, mostly funded by the state. That represents a tremendous investment into human capital that would be lost if you let your citizens get shot up in preventable rocket attacks.
So no, human lives are not actually cheap when viewed through the lens of a country, even when completely excluding morals and only looking at it financially. They are in fact quite valuable.

[1] https://en.wikipedia.org/wiki/List_of_countries_by_labour_pr... [2] https://en.wikipedia.org/wiki/Taxation_in_Israel#Income_tax
This essay, like so many others, mistakes the task of "building" software for the task of "writing" software. Anyone in the world can already get cheap, mass-produced software to do almost anything they want their computer to do. Compilers spit out a new build of any program on demand within seconds, and you can usually get both source code and pre-compiled copies over the internet. The "industrial process" (as TFA puts it) of production and distribution is already handled perfectly well by CI/CD systems and CDNs.
What software developers actually do is closer to the role of an architect in construction or a design engineer in manufacturing. They design new blueprints for the compilers to churn out. Like any design job, this needs some actual taste and insight into the particular circumstances. That has always been the difficult part of commercial software production and LLMs generally don't help with that.
It's like thinking the greatest barrier to producing the next great Russian literary novel is not speaking Russian. That is merely the first and easiest barrier, but after learning the language you are still no Tolstoy.
> What software developers actually do is closer to the role of an architect in construction or a design engineer in manufacturing. They design new blueprints for the compilers to churn out. Like any design job, this needs some actual taste and insight into the particular circumstances. That has always been the difficult part of commercial software production and LLMs generally don't help with that.
As Bryan Cantrill commented (quoting Jeff Bonwick, co-creator of ZFS): code is both information about the machine and the machine:
Whereas an architect creates blueprints, which are information that gets constructed into a building or other physical object, and a design engineer likewise creates documents, information that gets turned into machines, when a developer writes code they are generating information that itself acts like a machine.
Software has a duality of being both.
How does one code and not create a machine? Produce a general architecture in UML?
I think what Cantrill is getting at here is that a running program necessarily consists of both code and hardware. If the software is missing, the hardware will be idling. If the hardware is not present, then the software will be just bytes on a storage device. It's only the combination of hardware and software that makes a working system.
What software developers produce is not a machine by itself. It's at most a blueprint for a machine that can be actualized by combining it with specific hardware. But this is getting a bit too philosophical and off track: LLMs can help produce source code for a specific program faster, but they are not very good at determining whether a specific program should be built at all.
You're getting caught up on the technical meaning of terms rather than what the author actually wrote.
They're explicitly saying that most software will no longer be artisanal - a great literary novel - and will instead become industrialized - mass-produced paperback garbage books. But they're also saying that good software, like literature, will continue to exist.
Yes, I read the article. I still think it's incorrect. Most software (especially by usage) is already not artisanal. You get the exact same browser, database server and (whatsapp/signal/telegram/whatever) messenger client as basically everyone else. Those are churned out by the millions from a common blueprint and designed by teams and teams of highly skilled specialists using specialized tooling, not so different from the latest iPhone or car.
As such, the article's point fails right at the start when it tries to argue that software production is not already industrial. It is. But if you look at actual industrial design processes, their equivalent of "writing the code" is relatively small. Quality assurance, compliance with various legal requirements, balancing different requirements for the product at hand, having endless meetings with customer representatives to figure out requirements in the first place: those are where most of the time goes, and those are exactly the places where LLMs are not very good. So the part that is already fast will get faster and the slow part will stay slow. That is not a recipe for revolutionary progress.
I think the author of the post envisions more code-authoring automation, more generated code/tests/deployment, exponentially more. To the degree that what we have now would seem "quaint", as he says.
Your point that most software uses the same browsers, databases, tooling and internal libraries points to a weakness, a sameness that current AI can exploit to push that automation capability much further. Hell, why even bother with any of the generated code and infrastructure being "human readable" anymore? (Of course, there are all kinds of reasons that is bad, but just watch that "innovation" get a marketing push and take off. Which would only mean we'd need viewing software to make whatever was generated readable - as if anyone would read to understand hundreds/millions of generated complex anything.)
LLMs produce human readable output because they learn from human readable input. It's a feature. It allows it to be much less precise than byte code, for example, which wouldn't help at all.
There is a large mass of unwritten software. It would add value but it is too bespoke to already have an open source solution. Think about a non-profit organization working with proprietary file formats and databases. They will be able to generate automation tools that they could otherwise not afford. This will be repeated over and over. This is what I think the author is getting at.
> You get the exact same browser, database server and (whatsapp/signal/telegram/whatever) messenger client as basically everyone else.
Hey! I'm going to passionately defend my choice over a really minor difference. I mean do you see how that app does their hamburger menu?! It makes the app utterly unusable!
Maybe I'm exaggerating here but I've heard things pretty close in "Chrome vs Firefox" and "Signal vs ..." threads. People are really passionate about tiny details. Or at least they think that's what they're passionate about.
Unfortunately I think what they don't realize is that passion often hinders that revolutionary progress you speak of. It just creates entrenched players and monopolies in domains where it should be near trivial to move (browsers are definitely trivial to jump ship from)
> It just creates entrenched players and monopolies in domains where it should be near trivial to move (browsers are definitely trivial to jump ship from)
I think this is understating the cost of jumping. Basically zero users care about the "technological" elements of their browser (e.g. the render engine, JS engine, video codecs) so long as it offers feature equivalence, but they do care a lot about comparatively "minor" UX elements (e.g. password manager, profile sync, cross-platform consistency, etc) which probably actually dominate their user interaction with the browser itself and thus understandably prove remarkably sticky ("minor" here is in terms of implementation complexity versus the rest of a browser).
I guess two things can be true at the same time. And I think AI will likely matter a lot more than detractors think, and nowhere near as much as enthusiasts think.
Perhaps a good analogy is the spreadsheet. It was a complete shift in the way that humans interacted with numbers. From accounting to engineering to home budgets - there are few people who haven't used a spreadsheet to "program" the computer at some point.
It's a fantastic tool, but has limits. It's also fair to say people use (abuse) spreadsheets far beyond those limits. It's a fantastic tool for accounting, but real accounting systems exist for a reason.
Similarly AI will allow lots more people to "program" their computer. But making the programming task go away just exposes limitations in other parts of the "development" process.
To your analogy I don't think AI does mass-produced paperbacks. I think it is the equivalent of writing a novel for yourself. People don't sell spreadsheets, they use them. AI will allow people to write programs for themselves, just like digital cameras turned us all into photographers. But when we need it "done right" we'll still turn to people with honed skills.
I think existing skilled programmers are leveraging AI to increase productivity.
I think there are some people with limited, or no, programming experience who are vibe coding small apps out of nothing. But I think this is a tiny fraction of people. As much as the AI might write code, the tools used to do that, plus compile, distribute etc are still very developer focused.
Sure, one day my pastor might be able to download and install some complete environment which allows him to create something.
Maybe it'll design the database for him, plus install and maintain the local database server for him (or integrate with a cloud service.)
Maybe it'll get all the necessary database and program security right.
Maybe it'll integrate well with other systems, from email to text-import and export. Maybe that will all be maintainable as external services change.
Maybe it'll be able to do support when the printing stops working, or it all needs to be moved to a new machine.
Maybe this environment will be stable enough for the years and decades that the program will be used for. Maybe updating or adding to the program along the way won't break existing things.
Maybe it'll work so well it can be distributed to others.
All this without my pastor even needing to understand what a "variable" is.
That day may come. But, as well as it might or might not write code today, we're a long long way from this future. Mass producing software is a lot more than writing code.
We could have LLMs capable of doing all that for your pastor right now and it would still take time before these systems can effectively reason through troubleshooting this bespoke software. Right now the effectiveness of LLM-powered troubleshooting software platforms relies upon the gravity induced by millions of programmers sharing experiences on more or less the same platforms. Gigabytes to terabytes of text training data on all sorts of things that go bonkers on each platform.
We are now undergoing a Cambrian explosion of bespoke software vibe coded by a non-technical audience, and each one brings with it new sets of failure modes only found in their operational phase. And compared to the current state, effectively zero training data to guide their troubleshooting response.
Non-linearly increasing the surface area of software to debug, and inversely decreasing the training data to apply to that debugging activity will hopefully apply creative pressure upon AI research to come up with more powerful ways to debug all this code. As it stands now, I sure hope someone deep into AI research and praxis sees this and follows up with a comment here that prescribes the AI-assisted troubleshooting approach I’m missing that goes beyond “a more efficient Google and StackOverflow search”.
Also, the current approach is awesome for me to come up to speed on new applications of coding and new platforms I’m not familiar with. But for areas that I’m already fluent in and the areas my stakeholders especially want to see LLM-based amplification, either I’m doing something wrong or we’re just not yet good at troubleshooting legacy code with them. There is some uncanny valley of reasoning I’m unable to bridge so far with the stuff I’m already familiar with.
“Garbage books” are mass-printed, but aren’t mass-written in a mass production sense. Mass production is about producing fairly exact copies of something that was designed once. The design part has always remained more artisanal than industrial. It’s only the production based on the design (or manuscript) that is industrial.
The difference with software is that software is design all the way down. It only needs to be written once, similar to how a mass-produced item needs only be designed once. The copying that corresponds to mass production is the deployment and execution of the software, not the writing of it.
Isn't this already the case? Your company doesn't build its own word processor, they license it from Microsoft, or they pay Google for G Suite, or whatever. Great books are sold in paperback, after all.
The syntactic representation will become that. At the end of the day it's just math ops and state sync of memory and display. Even semantic objects like an OS's protected memory are a special case of access control that can be mathematically computed around. There is nothing important about special semantics.
The user experience will be less constrained as the self-arrangement of pixels improves and users no longer run into designer constraints, usually due to the limited granularity some button widget or layout framework is capable of.
"Artisanal" software engineers probably never were their own self selected identity.
Have been writing code since the late 80s, when Windows and commercial Unix were too expensive and we all wrote shoddy but functional kernels. Who does that now? Most gigs these days are glue code to fetch/cache deps and template concrete config values for frameworks. Artisanal SaaS configuration is not artisanal software engineering.
And because software engineers were their own worst enemy the last decade, living big as they ate other people's jobs and industries, hate for the industry has gone mainstream. Something politicians have to react to. Non-SWEs don't want to pay middle men to use their property. GenAI can get them to that place.
As an art teacher once said; making things for money is not the practice of a craft. It's just capitalism. Anyone building SaaS apps through contemporary methods is a Subway sandwich artist, not the old timey well rounded farmer, hunter, who also bakes bread.
What he's missing is that there's always been a market for custom-built software by non-professionals. For instance, spreadsheets. Back in the 1970s engineers and accountants and people like that wrote simple programs for programmable calculators. Today it's Python.
The most radical development in software tools, I think, would be more tools for non-professional programmers to program small tools that put their skills on wheels. I did a lot of biz dev around something that encompassed "low code/no code", but a revolution there involves smoothing out 5-10 obstacles with a definite Ashby character: if you fool yourself into thinking you can get away with ignoring the last 2 required requirements, you get just another Wix that people will laugh at. For now, AI coding doesn't have that much to offer the non-professional programmer, because a person without insight into the structure of programs, project management and a sense of what quality means will go in circles at best.
I think the thinking in the article is completely backwards about the economics. I mean, the point of software is that you can write it once and the cost to deploy a billion units is trivial in comparison. Sure, AI slop can put the "crap" in "app", but if you have any sense you don't go cruising the app store for trash; you find out about best-of-breed products or products that are the thin edge of a long wedge (like the McDonald's app, which is valuable because it has all the stores backing it).
This was already true before LLMs. "Artisanal software" was never the norm. The tsunami of crap just got a bit bigger.
Unlike clothing, software always scaled. So, it's a bit wrongheaded to assume that the new economics would be more like the economics of clothing after mass production. An "artisanal" dress still only fits one person. "Artisanal" software has always served anywhere between zero people and millions.
LLMs are not the spinning jenny. They are not an industrial revolution, even if the stock market valuations assume that they are.
Agreed, software was always kind of mediocre. This is expected given the massive first mover advantage effect. Quality is irrelevant when speed to market is everything.
Unlike speed to market it doesn't manifest in an obvious way, but I've watched several companies lose significant market share because they didn't appreciate software quality.
I've worked with a lot of people involved in the process who happily request that their software get turned into spaghetti. Often because some business process "can't" be changed, but mostly because decision makers do not know / understand what they're asking for in the larger scheme of things.
A good engineer can help mitigate that, but only so much. So you end up with industrial sludge to some extent anyway if people in the process are not thoughtful.
> It's like thinking the greatest barrier to producing the next great Russian literary novel is not speaking Russian.
The article is very clearly not saying anything like that. It's saying the greatest barrier to making throwaway comments on Russian social media is not speaking Russian.
Roughly the entire article is about LLMs making it much cheaper to make low quality software. It's not about masterpieces.
And I think it's generally true of all forms of generative AI, what these things excel at the most is producing things that weren't valuable enough to produce before. Throwaway scripts for some task you'd just have done manually before is a really positive example that probably many here are familiar with.
But making stuff that wasn't worth making before isn't necessarily good! In some cases it is, but it really sucks if we have garbage blog posts and readmes and PRs flooding our communication channels because it's suddenly cheaper to produce than whatever minimal value someone gets out of foisting it on us.
As others have said, you're missing the author's point. The author is claiming that the act of writing software is getting industrialized by LLMs. LLMs will produce small, useful, but completely disposable programs that under the previous "artisanal" model would normally take me or another programmer an hour or so to write or debug. Or for something a bit more complicated, it can be vibe coded in 10 minutes, whereas it otherwise would have taken 10 hours to write and debug. You wouldn't want to use this sort of software extensively or for very long, just like you probably wouldn't frame a photo posted on social media. It might just be something to do some random task with your computer that is nontrivial that no other software tool does out of the box.
> It's like thinking the greatest barrier to producing the next great Russian literary novel is not speaking Russian. That is merely the first and easiest barrier, but after learning the language you are still no Tolstoy.
And what do you feel is the role of universities? Certainly not just to learn the language right? I'm going through a computer engineering degree and sometimes I feel completely lost with an urge to give up on everything, even though I am still interested in technology.
I'm kinda hoping that eventually each ractor will run in its own Ruby::Box and that each box will get garbage collected individually, so that you could have separate GCs per ractor, BEAM-style. That would allow them to truly run in parallel. One benefit should be to cut down p99 latency, since far fewer requests would be interrupted by garbage collection.
I'm not actually in need of this feature at the moment, but it would be cool and I think it fits very well with the idea of ractors as being completely separated from each other. The downside is of course that sharing objects between ractors would get slower as you'd need to copy the objects instead of just sharing the pointer, but I bet that for most applications that would be negligible. We could even make it so that on ractor creation you have to pass in a box for it to live in, with the default being either a new box or the box of the parent ractor.
They already truly run in parallel in Ruby 4.0. The overwhelming majority of contention points have been removed in the last year.
Ruby::Box wouldn't help reduce contention further; it actually makes it worse, because with Ruby::Box, classes and modules have an extra indirection to go through.
The one remaining contention point is indeed garbage collection. There is a plan for Ractor local GC, but it wasn't sufficiently ready for Ruby 4.0.
I know they run truly parallel when they're doing work, but GC still stops the world, right?
I don't understand why that extra indirection would be necessary, though. Can't you just have completely separate boxes with their own copies of all classes etc, or does that use too much memory? (Maybe some COW scheme might work, doodling project for the holidays acquired haha)
Anyway, very cool work and I hope it keeps improving! Thanks for 4.0 byroot!
Yes, Ractor local GC is the one feature that didn't make it into 4.0.
> Can't you just have completely separate boxes with their own copies of all classes etc, or does that use too much memory?
Ruby::Box is kinda complicated and still needs a lot of work, so it's unclear what the final implementation will look like. Right now there is no CoW or any type of sharing for most classes, except for core classes.
Core classes are the same object (pointer) across all boxes, however they have a constant and method table for each box.
But overall what I meant to say is that Box wouldn't make GC any easier for Ractors.
Yes, you are correct. But actually, I am not claiming that someone claimed it :) What I am trying to get at is the idea that the "business people" usually bring up, that they are looking after the user's/customer's interest and that others don't have the "business mind", while actually, when it comes to this kind of decision making, all of that goes out the window, because they want to shift the blame.
Taking a few more steps back: most of the services we use are not so essential that we cannot bear them being down a couple of hours over the course of a year. We have seen that over and over again with Cloudflare and AWS outages. The world continues to revolve. If we were a bit more reasonable with our expectations and realistic about required uptime guarantees, there wouldn't be much worry about something being down every now and then, and we wouldn't need to worry about our livelihood if we need to reboot a customer's database server once a year, or about their impression of the quality of the system we built, if such a thing happens.
But even that is unlikely, if we set things up properly. I have worked in a company where we self-hosted our platform, and it didn't have the most complex fail-safe setup ever. Just have good backups and make sure you can restore, and 95% of the worries go away for such non-essential products; our outages were less frequent than trouble with AWS or Cloudflare.
It seems that either way, you need people who know what they are doing, whether you self-host or buy some service.
That's more a small-business-owner perspective. For a middle manager, rattling some cages during a week of IBM downtime is adequate performance, while it is unclear how much performative response is necessary if a mom-and-pop shop is down for a day.
I've definitely built the same piece of software hundreds of times over, probably thousands. I've even set up CI to automate the build process.
The problem is that the construction equivalent of a software developer is not a tradesman but an architect. Programs are just blueprints that tell the compiler what to build.
Maybe you should re-read the "do things that don't scale" article. It is about doing things manually until you figure out what you should automate, and only then do you automate it. It's not about doing unscalable things forever.
Unless you have a plan to change the laws of physics, space will always be a good insulator compared to what we have here on Earth.
Tigerbeetle is very cool and I would love to see more of it. AFAIR they have been hinting that you could in theory plug in storage engines different from the debit/credit model they've been using for some time. Has any of this materialized? I would love to use it but just don't have any bookkeeping to do at the scale where bringing in Tigerbeetle would make sense. :(
It is the other way around --- it is _relatively_ easy to re-use the storage engine, but plug in your custom state machine (implemented in Zig). We have two state machines, an accounting one, and a simple echo one here: https://github.com/tigerbeetle/tigerbeetle/blob/main/src/tes....
I am not aware of any "serious" state machine other than accounting one though.
Gleam is really quite a nice language. I did AoC in it this year as well and came away with the following: (incomplete list for both positive and negative, these are mainly things that come to mind immediately)
Positive:
- It can be pretty performant if you do it right. For example, with some thought I got many days down to double digit microseconds. That said, you do need to be careful how you write it and many patterns that work well in other languages fall flat in Gleam.
- The language server is incredibly good. It autoformats, autocompletes even with functions from not-yet-imported-but-known-to-the-compiler packages, shows hints regarding code style and can autofix many of these, autofills missing patterns in pattern matches, automatically imports new packages when you start using them, and much much more. It has definitely redefined my view of what an LSP can do for a language.
- The language is generally a joy to work with. The core team has put a lot of effort into devex and it shows. The pipe operator is nice as always, the type system is no Haskell but is expressive enough, and in general it has a lot of well-thought-out interactions that you only notice after using it for a while.
Negative:
- The autoformatter can be a bit overly aggressive in rewriting (for example) a single line function call with many arguments to a function call with each argument on a different line. I get that not using "too much" horizontal space is important, but using up all my vertical space instead is not always better.
- The language (on purpose) focuses a lot on simplicity over terseness, but sometimes it gets a little bit much. Having to type `list.map` instead of `map` or `dict.Dict` instead of `Dict` a hundred times does add up over the course of a few weeks, and does not really add a lot of extra readability. OTOH, I have also seen people who really really like this part of Gleam so YMMV.
- Sometimes the libraries are a bit lacking. There are no matrix libraries as far as I could find. One memoisation library needed a mid-AoC update because the v1.0 release had broken it and nobody noticed for months; the maintainer did push out a fix within a day of realizing it was broken, though. The ones that exist and are maintained are great.
I can live with these negatives. What irritates me the most is the lack of if/else or guards or some kind of dedicated case-distinction on booleans. Pattern matching is great but for booleans it can be kinda verbose. E.g.
case x < 0 {
  True -> ...
  False ->
    case x > 10 {
      True -> ...
      False ->
        case x <= 10 {
          True -> ...
          False -> ...
        }
    }
}
You most likely asked an AI for this. They always think there is an `if` keyword in case statements in Gleam. There isn't one, sadly.
EDIT: I am wrong. Apparently there is, but it's a bit of a strange thing where it can only be used in guards on `case` clauses, and the guard can't do any real calculations.
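For reference, a guard looks roughly like this (a minimal sketch from memory, reusing the placeholder style from above; as far as I remember the guard expression is limited to simple comparisons and operators, no function calls):

case x {
  n if n < 0 -> ...
  n if n > 10 -> ...
  _ -> ...
}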
> - It can be pretty performant if you do it right. For example, with some thought I got many days down to double digit microseconds.
Was this the time of everything or just the time of your code after loading in the text file etc.?
The hello world starter takes around 110 ms to run on my PC via the script generated with `gleam export erlang-shipment` and 190 ms with `gleam run`.
Is there a way to make this faster, or is the startup time an inherent limitation of Gleam/the BEAM VM?
The time reported by the "gladvent" package when running with the "--timed" option. AFAICT that does not count compilation (if needed), VM startup time, any JITting happening, or reading in the text file. I'm fine with that tbh, I'm more interested in the time actually spent solving the problem. For other languages I wouldn't count language-specific time like compilation time either.
As to whether you can make startup time faster, I suppose you could keep a BEAM running at all times and have your CLI tools hotswap in some code, run it, and get the results back out or something. That way you can skip VM startup time. Since the BEAM is targeted more at very long-running (server) processes with heaps and heaps of concurrency, I don't think ultrafast startup time is really a focus of it.
> Having to type `list.map` instead of `map` or `dict.Dict` instead `Dict` a hundred times does add up over the course of a few weeks, and does not really add a lot of extra readability.
I did it in F# this year and this was my feeling as well. All of the List.map and Seq.filter calls would have been better called directly on the actual List or Seq. Not having the functions attached to the objects really hurts discoverability too.
Re argument formatting, I'd guess it's because it uses the Prettier algorithm which works like that.
However in my experience it's much better than the alternative - e.g. clang-format's default "binpack"ing of arguments (lay them out like prose). That just makes them hard to read and leads to horrible diffs and horrible merge conflicts.
From the wiki about IEX: "It was founded in 2012 in order to mitigate the effects of high-frequency trading." I can see how they don't want to track internal latency as part of that, or at least not share those numbers with outsiders. That just encourages high frequency traders again.
One would hope for a more technical solution to HFT than willful ignorance lol. For example, they could batch up orders every second and randomize them.
I worked in HFT. (Though am now completely out of fintech and have no skin in the game). "Flash Boys" traditional HFT is dead already, the trade collapsed in 2016-2018 when both larger institutions got less dumb with order execution, and also several HFTs "switched sides" and basically offered "non-dumb order execution" as a service to any institutions who were unable to play the speed game themselves. Look at how Virtu's revenue changed from mostly trading to mostly order execution services over that time period.
Flash Boys was always poorly researched and largely ignorant of actual market microstructure and who the relevant market participants were, but it also aged quite poorly, as all of its "activism" was useless because the market participants all smartened up on their own, purely profit-driven.
If you want to be activist about something, the best bet for 2026 is probably that so much volume is moving off the lit exchanges into internal matching and it degrades the quality of price discovery happening. But honestly even that's a hard sell because much of that flow is "dumb money" just wanting to transact at the NBBO.
Actually, here's the best thing to be upset about: apps gamifying stock trading / investing into basically SEC-regulated gambling.
This is what should happen, because the game actually being played is to profit off those who cannot react fast enough to a news event, rather than to profit off those who mispriced their order.
Or leave things in place, but put a 1 minute transaction freeze during binary events, and fill the order book during that time with no regard for when an order was placed, just random allocation of order fills coming out of the 1 minute pause.
These funds would lose their shit if they had to go back to knowledge being the only edge rather than speed and knowledge.
This isn't a good approach because it assumes there are no market makers on trading venues, and that they (as well as exchanges) do not compete for order flow. Also, maybe you haven't noticed, but stocks are often frozen during news announcements by regulatory request, so such pauses are already in place and are designed to maintain market integrity, not disrupt it with arbitrary fills.