Breaking down requirements, functionality and changes into smaller chunks is going to give you better results with most of the tools. If it can complete smaller tasks in the context window, the quality will likely hold up.
My go-to has been to develop task documents covering multiple pieces of functionality and their sub-tasks. Build one piece of functionality at a time; commit, clear the context, and start the next piece.
If something goes off the rails, back up to the commit, fix it and rebase the later changes, or abandon it and branch.
That’s if I want quality. If I just want to prototype and don’t care, I’ll let it go, see what I like and don’t like, and start over as detailed above.
There is a lot of data shuttling in enterprise applications, and if agents can write that part, so be it. I can spend more time on the harder business and technical problems that require creativity and working through options and potential solutions. Even here, the speed at which you can run multiple experiments in parallel is fantastic.
As for “it compiles”, that is nothing new. I have written code that I go back to later and wonder how it ever compiled. My process now is often to let the agent prototype something and see if it works, then go back and re-engineer it properly. Does doing it twice save time? Yes, because who’s to say my first take on the problem would have been correct? Now I have something concrete to call definitely wrong or right when considering how to rebuild it for long-term use.
Okay, so for example I might set something like "this bunch of parameters" immutable, but "this 16kB or so of floats" are just ordinary variables which change all the time?
Or then would the block of floats be "immutable but not from this bit"? So the code that processes a block of samples can write to it, the code that fills the sample buffer can write to it, but nothing else should?
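Roughly, yes. Here is a minimal Python sketch of that split, with invented names purely for illustration: the parameters are frozen, while the sample buffer stays an ordinary mutable array that only the fill and process code touches.

```python
from dataclasses import dataclass

# Hypothetical illustration: configuration is frozen,
# the hot sample buffer is a plain mutable array.
@dataclass(frozen=True)
class SynthParams:
    sample_rate: int
    gain: float

params = SynthParams(sample_rate=48000, gain=0.5)

buffer = [0.0] * 4096          # the "16kB or so of floats": freely mutable

def fill(buf):                 # the code that fills the sample buffer
    for i in range(len(buf)):
        buf[i] = 1.0

def process(buf, p):           # reads frozen params, rewrites samples in place
    for i in range(len(buf)):
        buf[i] *= p.gain

fill(buffer)
process(buffer, params)
# params.gain = 0.9  -> would raise dataclasses.FrozenInstanceError
```

"Nothing else should write to it" is then a matter of discipline (or module visibility) rather than the type system, but the parameters at least are enforced.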
The methods don't mutate the array, they return a new array with the change.
The trick is: How do you make this fast without copying a whole array?
Clojure includes a variety of collection classes that "magically" make these operations fast, for a variety of data types (lists, sets, maps, queues, etc). Also on the JVM there's Vavr; if you dig around you might find equivalents for other platforms.
No, it won't be quite as fast as mutating a raw buffer, but it's usually plenty fast enough, and you can always special-case performance-sensitive spots.
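To give a feel for how the "no full copy" trick works, here's a toy path-copying sketch in Python. Real libraries (Clojure's vectors, Vavr) use wide tries and are far more optimized; this only shows the core idea that an update rebuilds the path to one leaf and shares every other subtree.

```python
# Toy persistent vector as a balanced tuple tree (size must fit the build).
def build(items):
    if len(items) == 1:
        return items[0]
    mid = len(items) // 2
    return (build(items[:mid]), build(items[mid:]))

def get(node, i, size):
    if size == 1:
        return node
    half = size // 2
    return get(node[0], i, half) if i < half else get(node[1], i - half, size - half)

def assoc(node, i, size, val):
    """Return a NEW tree with index i set to val; untouched subtrees are shared."""
    if size == 1:
        return val
    half = size // 2
    if i < half:
        return (assoc(node[0], i, half, val), node[1])   # right subtree shared
    return (node[0], assoc(node[1], i - half, size - half, val))  # left shared

v1 = build(list(range(8)))
v2 = assoc(v1, 3, 8, 99)   # O(log n) new nodes; v1 is completely unchanged
```

Both versions stay live: `get(v1, 3, 8)` is still `3`, `get(v2, 3, 8)` is `99`, and `v2[1] is v1[1]` because the right half was never copied.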
Even if you never write a line of production Clojure, it's worth experimenting with just to get into the mindset. I don't use it, but I apply the principles I learned from Clojure in all the other languages I do use.
It ends up being quite the opposite - many, many bugs come from unexpected side effects of mutation. You pass that array to a function and it turns out 10 layers deeper in the call stack, in code written by somebody else, some function decided to mutate the array.
Immutability gives you solid contracts. A function takes X as input and returns Y as output. This is predictable, testable, and thread safe by default.
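A contrived Python example of the contrast (function names invented for illustration): the first helper silently edits the caller's data from deep in the call stack; the second honors the X-in, Y-out contract.

```python
# The mutation hazard: a helper somewhere down the stack edits the caller's list.
def normalize_in_place(xs):
    peak = max(abs(x) for x in xs)
    for i in range(len(xs)):
        xs[i] /= peak            # side effect: caller's data changed

# The immutable contract: takes X, returns Y, input untouched.
def normalized(xs):
    peak = max(abs(x) for x in xs)
    return [x / peak for x in xs]

samples = [1.0, -2.0, 0.5]
out = normalized(samples)
# samples is still [1.0, -2.0, 0.5]; out is the scaled copy
```

With the second form, any test can assert on `out` without worrying about what ten other functions did to `samples` first.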
If you have a bunch of stuff pointing at an object and all that stuff needs to change when the inner object changes, then you "raise up" the immutability to a higher level.
If you keep going with this philosophy you end up with something roughly like "software transactional memory" where the state of the world changes at each step, and you can go back and look at old states of the world if you want.
Old states don't hang around if you don't keep references to them. They get garbage collected.
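A tiny Python sketch of that "raised up" world-state idea (`MappingProxyType` is only a stand-in for a genuinely persistent map, and a real STM system does far more): each step produces a new world, old worlds you still reference remain inspectable, and ones you drop get garbage collected.

```python
from types import MappingProxyType

def step(world, key, value):
    """Return a new immutable world with one entry changed."""
    new = dict(world)            # note: a full copy here; persistent maps avoid this
    new[key] = value
    return MappingProxyType(new)

w0 = MappingProxyType({"tick": 0, "score": 0})
w1 = step(w0, "tick", 1)
w2 = step(w1, "score", 10)

history = [w0, w1, w2]           # keep references -> "time travel" through states
# w0["score"] is still 0 even after w2 exists; drop history and the GC reclaims them
```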
Okay, so this sounds like it's a method of programming that is entirely incompatible with anything I work on.
What sort of thing would it be useful for?
The kind of things I do tend to have maybe several hundred thousand floating point values that exist for maybe a couple of hundred thousandths of a second, get processed, get dealt with, and then are immediately overwritten with the next batch.
I can't think of any reason why I'd ever need to know what they were a few iterations back. That's gone, maybe as much as a ten-thousandth of a second ago, which may as well be last year.
It is useful for the vast majority of business processing. And, if John Carmack is to be believed, video game development.
Carmack's post explains it: if you make a series of immutable "variables" instead of reassigning one, it is much easier to debug. This is a microcosm of time-travel debugging; it lets you look at the state of those variables several steps back.
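In Python terms, the difference looks something like this (values and names invented):

```python
# Reassigning one variable destroys the intermediate states:
#   x = load(); x = scale(x); x = clip(x)   # what was x after scale()?
# A series of immutable bindings keeps every step inspectable:
raw     = [0.2, 1.8, -3.0]
scaled  = [x * 0.5 for x in raw]                  # raw is still visible
clipped = [max(-1.0, min(1.0, x)) for x in scaled]
# At a breakpoint on the last line you can examine raw, scaled, and
# clipped side by side -- the small-scale time travel Carmack describes.
```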
I don't know anything about your specific field, but I am confident that getting to the point where you deeply understand this perspective will improve your programming, even if you don't always use it.
It depends on how many times those few extra servers get duplicated. Getting an extra server for on-premises installed products is worse than pulling teeth. It's not one server; it's one server times a thousand installations.
This has been one of my pet peeves on our projects: how simply can I build and run a project? Ideally, with just Java and git, can I download, build, and run? Throw in native libraries, and the question becomes: can I build with the compiler tools installed, without first building five other projects and setting environment variables pointing to the locations of their code?
I used vibe coding to build a UI prototype of a workflow. I used mockup images as the basis for the layout and let the agent use Redis as the persistence layer. I know it will be throwaway code, and I don't care how it works underneath as long as it can demonstrate the flow I want.
I have also used prompt-driven development to let the agent code something I expect to turn into a longer-term product. There I do more review of the code, ensuring it meets all the development standards I would expect of myself or any other developer.
There are certainly differing degrees of the two types of development.
The Copilot plug-in for IntelliJ is not on par with other coding agents at the moment. The usability of the plug-in and its IDE integration make it a bit annoying to use for any length of time.