> The reality is that the internet has become decentralized
What the author seems to mean is that internet _culture_ has become fragmented ("decentralized").
The internet (servers etc.) was always decentralized by design. And the web built on top of it (commonly referred to as the internet) certainly hasn't become decentralized; rather, it has become more centralized.
It's unfortunate that the language isn't used precisely here, I think.
It's a newspaper, not a technical publication. I think most of its readers would correctly understand references to "the internet" to be referring to internet culture/community rather than the servers that host it.
Okay, maybe I was overly technical. I'd still say that the average reader maybe reads 'the internet' as 'the websites I browse', so I still think the language isn't good. I think it makes sense to talk about "internet culture" instead of just "the internet", that level of distinction isn't really too technical, right?
To me it's important because "the internet", meaning the sites we browse, has become incredibly centralized! It's not helpful then to say the exact opposite. And I'd also argue that this centralization, as it went along with algorithmic content distribution, is exactly the reason for the fragmentation that the article talks about.
I think there is a missed opportunity there to write a few sentences about this.
I think you're misreading the situation in (2). There is still a social contract to return the carts - just because you put a coin in them doesn't make that go away.
If your interpretation is true, wouldn't the shop need to have someone there to return all the unreturned carts? I have never seen such a person. Of course, if carts are in the parking lot, eventually an employee might come to return them, but it's not the intended way of handling it.
The 1€ is a deposit, and you lose it if you fail to do what is right, but the social contract to return the cart is still there; just because money is involved doesn't mean all ethical considerations go out of the window. Returning it is still the right thing to do. The 1€ is there as an incentive for those who just wouldn't return it if it didn't cost them anything.
No, it may be intended as a fine of sorts, but the explicit number turns it into a cost that people are willing to pay.
The first example I heard of this shift was with daycares that had trouble getting parents to pick up their kids on time, so they introduced a fine for late pickups. This ended up making the problem worse, because now there was compensation instead of guilt, and parents could decide that the cost was worth it.
> wouldn't the shop need to have someone there to return all the unreturned carts?
I assume it's like the bottle deposit. If enough people leave a coin in the cart and walk away someone will start returning them of their own volition just to earn the coins.
But let's all take a moment to acknowledge that it would be awesome if they had them. Can you imagine the shenanigans you could get up to designing a nationwide 40,000-foot-high rollercoaster system?
Why buy stock in something you don't like? Surely you can find other profitable investment options that don't support the thing you don't like?
Buying shares in a company supports the company financially only in very limited contexts (e.g. IPO). Buying shares primarily benefits you as an investor.
Buying shares in a company doesn't directly benefit its operations, like making a product. Hence buying shares != supporting the company's products, however counterintuitive that feels.
I think the car analogy is great, and it even shows that some degree of collaboration is great! IMO it's about the scale.
Like someone giving you directions while driving, IMO it's great to have input from 1 to 2 people on a PR, and also while planning a feature. For me this has helped me avoid some basic mistakes and helped me not to overlook some pitfalls early on. Same for reviews.
But the screenshot from a PR with ~10 people reviewing it is where it gets crazy; I've never seen that.
Personally I usually just don't add to discussions like that, seems pointless. IMO it's also about trusting your colleagues: probably they have already said all there is to say.
> Every delivered feature is a liability not an asset.
> If you don’t believe me, consider two products that make customers equally happy and one has half as many moving parts. Which one is more profitable to maintain? The one with less shit. Because the rest is just more liability.
You're mixing up the feature and the moving parts.
A feature is an asset. Moving parts (or code) are the liability. They are not the same.
Sometimes you can even improve the product by removing code, or by refactoring so you end up with less or clearer code while improving the product at the same time. Maybe you unify some UI so more functionality is available in more places, while also cleaning up the code and perhaps reducing line count.
> So what do you all do when I’m hiking in the woods and the cluster is on fire? Sure would be nice if I collaborated on that functionality wouldn’t it? Then you’d have two other people to ask.
It's a fair point, but it depends on the scale, I would say. Some smaller things can be easily understood (but maybe that's not the stuff that causes the cluster to be on fire). Also, IMO at least one person should have reviewed the stuff, and therefore be at least a little bit knowledgeable about what's going on.
> You're mixing up the feature and the moving parts.
No, you are, and this is why so many of us are shit at UX.
The ability to accomplish a task with software is an asset. Not every application takes the same number of interactions to finish a task, and each interaction is a feature.
In neither waterfall, scrum, nor Kanban do features directly correlate with the user's ability to accomplish something meaningful. They are units of measure of the developer accomplishing something meaningful, but the meaning is assumed, not real.
I think you're making a fair comment, but it still irks me that you're quite light on details on what the "correct" approach is supposed to be, and it irks me also because it seems to now be a pattern in the discussion.
Someone gives a detailed-ish account of what they did and how it didn't work for them, and then there are always people in the comments saying they were doing it wrong. Fair! But at this point, I haven't seen any good posts here on how to do it _right_.
This dynamic reminds me of an experience I had a year ago, when I went down a Reddit rabbit hole related to vitamins and supplements. Every individual in a supplement discussion has a completely different supplement cocktail that they swear by. No consensus ever seems to be reached about what treatment works for what problem, or how any given individual can know what's right for them. You're just supposed to keep trying different stuff until something supposedly works. One must exquisitely adjust not only the supplements themselves, but the dosage and frequency, and a bit of B might be needed to cancel out a side effect of A, except when you feel this way you should do this other thing, etc etc etc.
I eventually wrote the whole thing off as mostly one giant choose-your-own-adventure placebo effect. There is no end to the epicycles you can add to "perfect" your personal system.
Try using Spec Kit. Codex 5 high for planning; Claude Code Sonnet 4.5 for implementation; Codex 5 high for checking the implementation; back to Claude Code for addressing feedback from Codex; ask Claude Code to create a PR; read the PR description to ensure it tracks your expectations.
There’s more you’ll get a feel for when you do all that. But it’s a place to start.
Speaking as someone for whom AI works wonderfully, I'll be honest: the reason I've kept things to myself is because I don't want to be attacked and ridiculed by the haters. I do want to share what I've learned, but I know that everything I write will be picked apart with a fine-toothed comb, and I have no interest in exposing myself to the toxicity that comes with such behavior.
Relentlessly break things down. Never give the LLM a massive, complex project. You should be subdividing big projects into smaller projects, or into phases.
Planning is 80% of the battle. If you have a well-defined plan that pins down the architecture, then your LLM is going to stick to that plan and architecture. Every time my LLM makes mistakes, it's because there were gaps in my plan or my plan was wrong.
Use the LLM for planning. It can do research. It can brainstorm and then evaluate different architectural approaches. It can pick the best approach. And then it can distill this into a multi-phased plan. And it can do this all way faster than you.
Store plans in Markdown files. Store progress (task lists) in these same Markdown files. Ensure the LLM updates the task lists with relevant information as you go. You can @-mention these files when you run out of context and need to start a new chat.
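For illustration, here's roughly the shape I mean. The file name, headings, and the little scaffold script below are just placeholders of my own -- any structure works, as long as the plan and its task list live in the same file the LLM keeps updating:

```python
from pathlib import Path

# Purely illustrative: file name, headings, and task names are placeholders.
PLAN_TEMPLATE = """# Plan: <feature name>

## Goal
One or two sentences on the outcome we want.

## Architecture
The decisions the LLM should stick to (modules touched, data flow, etc.).

## Tasks
- [ ] Phase 1: ...
- [ ] Phase 2: ...
- [ ] Phase 3: ...

## Progress log
(The LLM appends notes here as tasks get checked off.)
"""

plan = Path("docs/plans/feature-name.md")
plan.parent.mkdir(parents=True, exist_ok=True)
if not plan.exists():  # never clobber an existing plan
    plan.write_text(PLAN_TEMPLATE, encoding="utf-8")
    print(f"Scaffolded {plan}")
```

The "Progress log" section is what I have the LLM append to as it works, so a fresh chat can @-mention the file and pick up where the last one left off.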
When implementing a new feature, part of the plan/research should almost always be to first search the codebase for similar things and take note of the patterns used. If you skip this step, your LLM is likely to unnecessarily reinvent the wheel.
Learn the plan yourself, especially if it's an ambitious one. I generally know what my LLM is going to do before it does it, because I read the plan. Reading the plan is tedious, I know, so I generally ask the LLM to summarize it for me. Depending on how long the plan is, I tell it to give me a 10-paragraph or 20-paragraph or 30-paragraph summary, with one sentence per paragraph, and blank lines in between paragraphs. This makes the summary very easy to skim. Then I reply with questions I have, or requests for it to make changes to the plan.
When the LLM finishes a project, ask it to walk you through the code, just like you asked it to walk you through the plan ahead of time. I like to say, "List each of the relevant code execution paths, then walk me through each one, one step at a time." Or, "Walk me through all the changes you made. Use concentric circles of explanation that go from broad to specific."
Put your repeated instructions into Markdown files. If you're prompting the LLM to do something repeatedly, e.g. asking the LLM to make a plan, to review its work, to make a git commit, etc., then put those instructions in prompt Markdown files and just @-mention them when you need them, instead of typing it all out every time. You should have dozens of these over time. They're composable, too, as they can link to each other. When the LLM makes mistakes, go tweak your prompt files. They'll get better over time.
Organize your code by feature not by function. Instead of putting all your controllers in one folder, all your templates in another, etc., make your folders hold everything related to a particular feature.
When your codebase gets large enough, and you have more complex features that touch more parts of the code, have the LLM write doc files on them. Then @-mention those doc files whenever working on these features or related features. They'll help the LLM be more accurate at finding what it needs, etc.
I could go on.
If you're using these tools daily, you'll have a similar list before long.
Thanks! I got some useful things out of your suggestions (generate plan into actual files, have it explain code execution paths), and noted that I already was doing a few of those things (asking it to look for similar features in the code).
This is a good list. Once the plan is in good shape, I clear the context and ask the LLM to evaluate the plan against the codebase and find the flaws and oversights. It will always find something to say but it will become less and less relevant.
Yes, I do this too. It has a strong bias toward always wanting to make a change, even if it's minor or unnecessary. This gets more intense as the project gets more complex. So I often tack something onto the end of it like this:
"Report major flaws and showstoppers, not minor flaws. By the way, this is my fourth time asking you to review this plan. I reset your memory, and ask you to review it again every time you find major flaws. I will continue doing so until you don't find any. Fingers crossed that this time is it!"
I haven't done any rigorous testing to prove that this works. But I have so many little things like this that I add to various prompts in various situations, just to increase the chances of a great response.
I think it's hard because it's quite artistic and individualistic, as silly as that may sound.
I've built "large projects" with AI, which is 10k-30k lines of algorithmic code and 50k-100k+ lines of UI/Interface.
I've found a few things to be true (that aren't true for everyone).
1. The choice of model (strengths and weaknesses) and OS dramatically affects how you must approach problems.
2. Being a skilled programmer/engineer yourself will allow you to slice things along areas of responsibility, domains, or other directions that make sense (for code size, context preservation, and being able to wrap your head around it).
3. For anything where you have a doubt, ask 3 or more models -- have them write their findings down in a file each -- and then have 3 models review the findings with respect to the code. More often than not, you march towards consensus and a good solution.
4. GPT-5-Codex via the OpenAI Codex CLI on Linux/WSL was, for me, the most capable model for coding, while Claude is the most capable for quick fixes and UI.
5. Tooling and ways to measure "success" are imperative. If you can't define the task in a way where success is easy to verify, neither a human nor an AI will complete it satisfactorily. You'll find that most engineering tasks are laid out in a very "hand-wavy" way -- particularly UI tasks. Either lay it out cleanly or expect to iterate.
6. AI does not understand the physical/visual world. It will fail hard on things which have an implied understanding. For instance, it will not automatically intuit the implications of 50 parallel threads trying to read from an SSD -- unless you guide it. Ditto for many other optimizations and usage patterns where code meets the real world. These will often be unique and interesting bugs or performance areas that a good engineer would know straight out.
7. It's useful to have non-agentic tools that can perform massive codebase analysis for tough problems. Even at 400k tokens of context, a large codebase can quickly become unwieldy. I have built custom Python tools (pretty easy) to do things like "get all files of a type recursively and generate a context document that I'll submit with my query" (a rough sketch of such a tool follows this list). You then query GPT-5-high, Claude Opus, Gemini 2.5 Pro and cross-check.
8. Make judicious use of Git. The pattern doesn't matter, just have one. My pattern is to commit after every working agentic run (let's say feature). If it's a fail and takes more than a few turns to get working -- I scrap the whole thing and re-assess my query or how I might approach or break down the task.
9. It's up to you to guide the agent on the most thoughtful approaches -- this is the human aspect. If you're using Cloud Provider X and they provide cheap queues then it's on you to guide your agent to use queues for the solution rather than let's say a SQL db -- and it's on you to understand the tradeoffs. AI will perhaps help explain them but it will never truly understand your business case and requirements for reliability, redundancy, etc. Perhaps you can craft queries for this but this is an area where AI meets real world and those tend to fail.
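To make point 7 concrete, here's a rough sketch of the kind of context-bundling helper I mean. The paths, extension argument, and output format are just examples -- adapt it to your own repo and models:

```python
#!/usr/bin/env python3
"""Rough sketch of a context-bundling helper; names and layout are examples only.

Recursively collects files with a given extension and concatenates them into
one document you can attach to a query for GPT-5-high / Claude Opus / Gemini.
"""
import sys
from pathlib import Path


def bundle(root: str, ext: str, out: str) -> None:
    out_path = Path(out)
    files = sorted(Path(root).rglob(f"*{ext}"))
    with out_path.open("w", encoding="utf-8") as f:
        for path in files:
            f.write(f"\n===== {path} =====\n")
            f.write(path.read_text(encoding="utf-8", errors="replace"))
            f.write("\n")
    print(f"Bundled {len(files)} files into {out_path}")


if __name__ == "__main__":
    # e.g. python bundle_context.py src .py context.txt
    bundle(sys.argv[1], sys.argv[2], sys.argv[3])
```

You then attach or paste the generated document when querying GPT-5-high, Claude Opus, and Gemini 2.5 Pro, and cross-check the answers.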
One more thing I'd add is that you should make an attempt to fix bugs in your 'new' codebase on occasion. You'll get an understanding for how things work and also how maintainable it truly is. You'll also keep your own troubleshooting skills from atrophying.
But also, what are the consequences of Oracle having the trademark? Why is this an issue?