> the quality of code produced by AI will improve dramatically as models evolve.
That's a very bold claim. We are already seeing a plateau in LLM capabilities in general, and there has been little improvement since their debut in the areas where they fall short, like making holistic changes to a large codebase. They only improve at things they are already good at, such as writing small glue programs. Expecting significant breakthroughs from scaling alone, without fundamental changes to the architecture, seems too optimistic to me.