I wouldn't agree that LLMs are a higher level of abstraction, but I've found they do help me think at a higher level of abstraction, by temporarily outsourcing cognitive load.
With changes like substantial refactors or ambitious feature additions, it's easy to exceed the infamous "seven things I can remember at once":
* the idea for the big change itself
* my reason for making the change
* the relevant components and how they currently work
* the new way they'll fit together after the change
* the messy intermediate state when I'm half finished but still need a working system to get feedback
* edge cases I'm ignoring for now but will have to tackle eventually
* actual code changes
* how I'm going to test this
Good lab notes, specs, etc. can help, but it's a lot to keep in mind. In practice these often turn into multi-person projects, and communication is hard, so that often means delay or drift. Having an agent temporarily worry about
* wiring a new parameter through several layers
* writing a test harness for an untested component
* experimentally adding multibyte character support on a branch
frees up my mental bandwidth for the harder parts of the problem.
The main benefit is to defer the concern until I have a mostly working system. Then I come back and review its output, since I'm still responsible for what it delivers, and I want better than "mostly working".
This has been very successful for me. My flavour of ADHD has historically made it hard for me to start new projects: I get very stuck on all the little details from the start while also trying to think about the high-level aspects.
Being able to spend my energy on the architectural decisions and validate my understanding before spending time on optimising the internals has actually allowed me to follow through with some of my designs.
Experimentation is then faster. If the data model wasn't good enough, I can actually experiment with it immediately, before we accidentally ship something to production and then have to deal with a very annoying data migration problem. The exact code doesn't matter to begin with when we just want to make sure the data is efficient to decode and is cache friendly.
I recently built a project I had in my mind for 3 years but could never work on because all the individual components were overwhelming. It involved e2e encryption, consensus, p2p networking, CRDTs, and API design. It was very nice to see it come together. The project ended up failing because an underlying invariant didn't hold, so it was nice to validate that and finally get it out of my head.
Does it? Claude the chatbot is available for free, and it can write code, but Claude Code is a separate product that as far as I know is only available on paid plans. Source: https://claude.com/product/claude-code
I guess I mixed the two, but I suppose the point still stands because Anthropic has a free Claude chatbot _and_ OpenAI doesn't have a Claude Code product (does it?).
Google also has aistudio.google.com, which is a Lovable competitor, and it's free for unlimited use. It seems to work much better than Gemini CLI, even on similar tasks.
What an interesting and strange article. The author barely offers a definition of "systems thinking", only names one person to represent it, and then claims to refute the whole discipline based on a single incorrect prediction and the fact that government is bad at software projects. It's not clear what positive suggestions this article offers except to always disregard regulation and build your own thing from scratch, which is ... certainly consistent with the Works In Progress imprint.
The way I learned "systems thinking" explicitly includes the perspectives this article offers to refute it - a system model is useful but only a model, it is better used to understand an existing system than to design a new one, assume the system will react to resist intervention. I've found this definition of systems thinking extremely useful as a way to look reductively at a complex system - e.g. we keep investing in quality but having more outages anyway, maybe something is optimizing for the wrong goal - and intervene to shift behaviour without tearing down the whole thing, something this article dismisses as impossible.
The author and I would agree on Gall's Law. But the author's conclusion to "start with a simple system that works" commits the same hubris that the article, and Gall, warn against - how do you know the "simple" system you design will work, or will be simple? You can't know either of those things just by being clever. You have to see the system working in reality, and you have to see if the simplicity you imagined actually corresponds to how it works in reality. Gall's Law isn't saying "if you start simple it will work", it's saying "if it doesn't work then adding complexity won't fix it".
This article reads a bit like the author has encountered resistance from people in the past from people who cited "systems thinking" as the reason for their resistance, and so the author wants to discredit that term. Maybe the term means different things to different people, or it's been used in bad faith. But what the article attacks isn't systems thinking as I know it, more like high modernism. The author and systems thinking might get along quite well if they ever actually met.
I didn't feel like he was refuting the whole discipline. Rather, he seems to admire Forrester and the whole discipline. The argument just seems to be, even with great systems thinking, you can't build a complex system from scratch and that existing complex systems are often hard to fix.
Couldn't one interpret "magical systems thinking" as a fallacy that people may commit when applying systems thinking? More broadly, I find some of the comments here rather harsh, also considering that many observations in the article are intuitively true for anyone who's ever been exposed to bureaucracy on the meta-level.
One could interpret the title that way, but not consistently with the rest of the article, which includes assertions like "in the realm of societies, governments and economies, systems thinking becomes a liability".
I think there's plenty to agree with in the article's descriptions of failure and hubris. What the critical commenters are taking issue with is that the article blames those symptoms on a straw man. It's a persuasive article, not a historical review, so it's reasonable to debate its conclusion and reasoning as well as its supporting evidence.
Exactly, it's a fallacy of systems thinking but it's not intrinsic to it. In fact, systems thinkers tend to understand that complex systems are, well, complex and not easy to reason about.
There is something about the Club of Rome's relationship to systems thinking that is similar to Dijkstra's observation about BASIC and programming.
Articles debunking them are always full of fundamental misunderstandings about the discipline. (The ones supporting them are obviously wrong.) And people focusing on understanding the discipline never actually refer to them in any way.
Yeah, what they are attempting to do in the span of one short essay is equivalent to trying to discredit an entire field of inquiry. Even if you don't think the field is worth anything, it should be obvious that it would take a lot of research and significant argumentation to accomplish that goal; this essay is lacking in both departments.
Maybe anecdotal, but in solution design I have often encountered designs that try to be generic for the sake of generality, also designs that complicate a simple repeatable task to accommodate arbitrary potential complications. I would argue that in many cases we like to introduce complexity to feel like we are doing something advanced. I like to design systems with a view that they do one thing only and that thing right but there is, to your point, arbitrariness and art and judgement in deciding the thing.
Speaking of "starting with a simple system that works" applied to Factorio: Shapez (and now Shapez 2) is like Factorio for abstract geometric shapes and colors.
It's got all the essential elements of Factorio that make it so interesting and compelling, which apply to so many other fields from VLSI design to networking to cloud computing.
But you mine shapes and colors and combine them into progressively more complex patterns!
> The author barely offers a definition of "systems thinking", only names one person to represent it, and then claims to refute the whole discipline based on a single incorrect prediction and the fact that government is bad at software projects.
All valid criticisms, but somehow it sounds exactly like something a member of inept bureaucracy would say.
This seems lazy. It's ad hominem, except not even that, since you don't know what inept bureaucracy I'm supposedly defending. Is there any argument you couldn't level this accusation at?
Apologies. It wasn't intended as ad hominem. I was just describing the general vibe of your comment, at least how I perceived it.
When inept bureaucracy is put in the spotlight, usually someone pops up to defend how much important work they are doing and how the things they deal with are just so complicated. And how criticisms are unfair and unfounded.
> assume the system will react to resist intervention
Systems don't do that. Only constituents who fear particular consequences do.
Systems also don't care about levels of complexity. It's insanely hard to actually break systems that are held together only by the "what the fuck is going on, let's look into that" kind of attention. Hours, days, weeks later, things run again. BILLIONS lost. Oh, we wish ...
At the end of the day, the term Systems Thinking is overloaded by all the parts that have been invented by so-called economists and "the financial industry". That makes me chuckle every time, now that it's 2025: oil-rich countries have been in development for decades, the advertisement industry is factory-farming content creators, and economists and multi-billionaires want more tikktoccc and instagwam to get into the backs of teen heads.
If you are a SWE, systems architect or anything in that sphere, please, ... act like you care about the people you are building for ... take some time off if you can and take care of what must be taken care of, ... it's just systems, after all.
You make a fine point. My simplified version of it is that there is no such thing as an isolated system. Things change. A system optimized for one environment is likely to fail when things change. Most of the hugely successful firms of today focus more on controlling their environment than on developing a capacity to adapt to unforeseeable consequences of unforeseen changes in their environment, even the ones that they cause themselves.
I think we were not using the same definition of “system” :)
> there is no such thing as an isolated system.
Very true.
Look no further than evolutionary biology: you see this all the time, where extinctions occur because the environment changes such that the system is no longer optimal.
> where extinctions occur because the environment changes such that the system is no longer optimal
What if we looked at the extinct species as constituents that have been removed because they were obsolete in the system? That way, the system remains optimal, without resisting change.
The system of humanity requires a lot. We used to say "survival of the fittest", which meant survival of the fittest and the "most aware", meaning being able to distinguish which survival strategy is the most viable for a given organism.
Fight, flight, freeze, dominance, independence, submission, DIY, DOBUY; the latter are especially interesting given how reduced information about the requirements and sensitivities of the individual body can cripple your organs to a point that benefits some interest group more than it benefits you. In other words: someone can make sure you are stupid enough to be abused for some specific task until you can be discarded. At this point we don't know whether the system will survive more than one period because of the interest group, or suffer within one or more periods because of that interest group.
In evolutionary biology, more symbiotic organisms and systems survived a lot longer than those who were less symbiotic, on scales that modern humans can't put into adequate numbers yet.
Isolated systems do exist. They can be isolated and they can self-isolate for various reasons and by various means. This happens even in species/systems we consider mostly unconscious while definitely sentient and aware.
Wear and tear and maintenance, leeching and seeding, putting info and questions into words and lurking; none of these really attach a system to another by default, by design or via behavior, reward and punishment. The rules go beyond that and stretch longer time frames than we account for.
The article seems to think that systems thinking only applies at a certain lower scale. Even bringing up the bullwhip effect, and talking about it in certain kinds of systems is itself systems thinking, just not at the subcomponent level which doesn't show it. Systems thinking is about interactions and context.
Where are the limits of optimization? There are no such things as "systems"; these are arbitrary concepts. Where does any system end? Odum learned the hard way, and I suspect CS is simply models of models that hide the interconnected nature as a way of isolating values, making money, and claiming the system works.
The deeper question is why create models of a reality in which all models are wrong, but some extract value long enough to create both ecological collapse and poverty? These are the end states or even goals of models in a universe with limited resources to surfaces of planets.
Each optimization is designed to create dystopic conditions. This is obvious.
I'm sorry this is happening to you and to your friend. I have some similar experience and want to share some advice I wish I had heard earlier.
It sounds to me like you did the right thing - situations like this can get worse if left unchecked and have serious consequences for the person in question and those around them. I'm not diagnosing your friend - I'm no expert, and various disorders can have those symptoms - but there are resources out there about (e.g.) mood disorders [1] that might give you some perspective and advice.
Treatment can help, and can make a huge difference. Hospitals are unpleasant but can sometimes be the only way for someone who needs treatment to receive it. I am certainly no legal expert, but I think if he was forcibly committed to a hospital and police were involved, he's unlikely to be released without accepting treatment.
You might find it helpful to join a support group for caregivers (e.g. [2]). In my experience it's common for friends as well as family members to attend those. People will offer resources and advice, as well as just sharing their experience, which can provide perspective and help with feeling lost.
Also consider (if you're not already) finding a therapist of your own. People in one of these episodes can push boundaries, say things to you they wouldn't normally mean, and generally be hard to be around while maintaining your own health and boundaries - particularly if you're invested in trying to help them.
That is the part I do not understand. I have never agreed with any health professionals to be part of his ongoing care. I suspect his family may have done so, but are abandoning their responsibility?
He had agreed to let me visit him in hospital shortly before he is discharged. I intend to make it very clear to the staff that I have not agreed to have any official role in his ongoing help.
You're right, you're not his caregiver, or obliged to be. Sorry if it sounded like I was suggesting that.
I doubt the staff would expect or pressure you to take responsibility for him. If anything you might have trouble getting them to even discuss his case with you - different states vary but in some cases they won't share case details without explicit permission from the patient. (If that sounds frustrating given your first hand experience of his symptoms and their progression - I sympathise.)
The support groups in particular may be useful despite that, just because you mentioned he's a housemate, so he may continue to be in your life. When I attended there were spouses, parents, but also just friends who wanted to help out their friend and understand what they were going through, without adopting responsibility for them.
This makes some good points about misuses of these AWS services, but the title is misleading. The article is actually more like "tempting but inadvisable use cases for AWS services".
My employer uses three of these heavily (ElastiCache, Kinesis and Lambda) and we get quite a bit of leverage out of them.
ElastiCache in particular surprised me. At first glance I mistook it for a transparent (and expensive) wrapper around sticking Redis on an EC2 instance, but if your usage is heavy enough to need multi-node clusters (e.g. read replicas or full Redis Cluster), its orchestration features are pretty useful. We can resize instances, fail over to a replica, and reshard clusters, with zero downtime, by clicking a button (or a one-line Terraform change). And never having to install security patches is nice too.
It certainly is expensive, though. (But if you're not willing to pay a premium for managed infra, what are you doing on AWS in the first place?)
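For anyone curious what "resize with zero downtime" looks like in practice, it's roughly a one-liner against the replication group. This is just a sketch; the group ID and node type below are placeholders, not anything from our setup:

```shell
# Vertically scale an existing ElastiCache (Redis) replication group in place.
# ElastiCache replaces nodes one at a time, failing over to replicas as it goes,
# so clients pointed at the cluster endpoint keep working throughout.
aws elasticache modify-replication-group \
  --replication-group-id my-redis-group \
  --cache-node-type cache.r6g.xlarge \
  --apply-immediately
```

The Terraform equivalent is the same idea: change the node type attribute on the replication group resource and apply.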
I may have lost people with lower confidence in their abilities and a greater fear of failure.
That HBR article I linked in the other thread actually addresses that. Their survey indicates that people are deterred less by lack of confidence in their abilities, and more by lack of confidence in your process to assess their abilities in the absence of a credential. The top-given reason (from both women and men) for not applying was “I didn’t think they would hire me since I didn’t meet the qualifications, and I didn’t want to waste my time and energy.”
Now maybe you're actively looking for people who hustle and won't take no for an answer (which isn't quite the same thing as "confident in their abilities"). Maybe that's your team culture, or your company culture. That's certainly your choice if so.
There is a lot of projection going on here. The simple statement that the requirements are not always (or even often) hard factors that can never be overridden should not be controversial in the least. That's true in all aspects of life.
So if you don't have a degree, you probably shouldn't let that hold you back. And maybe because you don't have a degree you should hustle a bit more than those who do. Again, that shouldn't be controversial.