At a recent AI workshop management made clear that they see AI as rendering sprints and scrums obsolete, that Kanban makes a lot more sense, and that estimating effort/story-points is also becoming meaningless. Which is a strong silver lining if you ask me.
I think it's to do with the bottleneck shifting away from code generation and towards specifying and reviewing and integrating code. The process of working with AI agents to produce specs, tech specs, code, and reviews lends itself more to a flow-based structure (like kanban).
Bear in mind this is a B2B enterprise company with a mix of legacy and greenfield. And management has invested heavily into designing a robust spec/context-based workflow for using agents. Might be different elsewhere.
Personally I don't think scrums, planning, retros, etc. were better than kanban even before AI, at least if you have switched-on, motivated, and smart people on your team. They actually made things less agile, and story points give a false sense of predictability. Imo the crucial factor may be that AI agents are smart and switched-on (with the right context).
It's a good excuse to move away from a shitty process, I'll take it! Fuck SCRUM, fuck Agile. No one was doing it anyway. I had to quit an Agile job because I was shipping shit without ever getting a lick of feedback, and this was not some low-stakes webdev work, it was for planning expensive real-world installations.
> How do I feel, about all the code I wrote that was ingested by LLMs? I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge. LLMs are going to help us to write better software, faster, and will allow small teams to have a chance to compete with bigger companies.
You might feel great, that's fine, but I don't.
And software quality is going down; I wouldn't agree that LLMs will help write better software.
> I wouldn't agree that LLMs will help write better software
Your statement makes no sense.
Even if you don't let LLMs author a single line of your code, they can still review it, find edge cases you didn't think about or suggest different approaches.
The fact that AI enables lots of slop does not negate its overall utility in good, informed hands.
The balance between 'finds edge cases' and 'hallucinates non-existent cases and wastes your time' may well be negative. LLMs are also not free: they cost significant money even today, while they are still subsidized by marketing budgets.
Over the last year, maintainers of GitHub repositories have repeatedly made clear that they are unhappy with AI-generated pull requests and test cases.
That is capitalism capitalizing. I sorta think it's also the computer going from a geek toy to mass adoption, with the incentives changing accordingly. 3D printers, for example, are good, but if they go mainstream they'll end up like HP 2D printers on the enshittification axis.
The coding standard is quality code, and one should bring that with them coming into the company. And if you mean linter and formatting rules: unless the company is young, its elders already had a fist fight to settle on one standard once and for all, zipped it into a file, and now everybody just uses it.