It allows these concepts to be expressed legibly for a human. Why would an AI model (not necessarily an LLM) need to write, say, "printf"? It doesn't need to understand that this is a print statement with certain expectations about how a print statement ought to behave in the scope of the shell. It already has all that information by virtue of running the environment. printf might as well be expressed as some n-bit integer for the machine, dispensing with all the window dressing we apply when writing functions by humans, for humans.
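To make that concrete, here's a minimal Python sketch. The opcode value and names like OP_EMIT_TEXT are made up for illustration: to a machine driving its own environment, a "function" can be nothing more than an integer index into a dispatch table, and the human-facing name carries no information the machine needs.

```python
import sys

# Arbitrary n-bit integer standing in for the name "printf".
OP_EMIT_TEXT = 0x2A

# The "library" from the machine's point of view: integers mapped to behaviour.
DISPATCH = {
    OP_EMIT_TEXT: lambda args: sys.stdout.write(args[0]),
}

def run(program):
    """Execute a list of (opcode, args) pairs; no names, just integers."""
    for opcode, args in program:
        DISPATCH[opcode](args)

# Roughly the equivalent of printf("hello\n"), minus the window dressing:
run([(OP_EMIT_TEXT, ["hello\n"])])
```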
Right, and all of that in the library is built to be legible to the human programmer, with constraints layered on to fit within the syntax of the underlying language. Imagine how efficient a function could be if it didn't need any of that window dressing. You could "grow" functions out of simulation and bootstrapping, and have them be black boxes we harvest output from, not much different from, say, using an organism in a bioreactor to yield some metabolite of interest: we might not know all the relevant pieces of the biochemical pathway, but we score putative production mutants on yield alone.
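As a rough illustration of growing a function and scoring it by output alone, here's a toy (1+1) hill-climber in Python. The representation (a lookup table standing in for an opaque evolved program) and the target behaviour are arbitrary assumptions; the point is only that candidates stay black boxes and are selected on yield, like production mutants in a bioreactor.

```python
import random

def target(x):
    """Desired behaviour, treated purely as a 'yield' to maximise."""
    return (3 * x + 1) % 16

def random_black_box():
    """An opaque candidate: just a table of outputs, no readable structure."""
    return [random.randrange(16) for _ in range(16)]

def mutate(box):
    """Point-mutate one entry of the table."""
    child = box[:]
    child[random.randrange(16)] = random.randrange(16)
    return child

def score(box):
    """Score on output alone: how many inputs produce the desired result."""
    return sum(box[x] == target(x) for x in range(16))

best = random_black_box()
for _ in range(5000):
    candidate = mutate(best)
    if score(candidate) >= score(best):  # keep the higher-yield "mutant"
        best = candidate

print(score(best), "/ 16 inputs handled correctly")
```

At no point does anything inspect how the candidate works; it's kept or discarded on measured output, which is the whole analogy.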
Indeed. And aside from that, LLMs cannot generalise out of distribution (OOD). There's relatively little training data of complex higher-order constructs in straight assembly compared to, say, Python code. Plus, the assembly will be specific to the target architecture.