Sorry if I didn’t use the correct terms; coming from my native language, I haven’t caught up on all the terminology. ;) But yes, I agree: the fact that different parts (individual properties) of the model’s output can be completed asynchronously by streaming is quite unique. Apple/Swift was late with async/await, but putting it all together, it probably plays well with the ‘never’ (I know ;)) asynchronous and reactive style of coding.
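Just to make the pattern concrete, here's a toy sketch: each streamed snapshot is the same struct with more optional fields filled in, so the consumer can render fields as they complete instead of waiting for the full object. All names here are invented for illustration; this is not any framework's actual API.

```swift
// Invented types: a sketch of partially generated structured output.
struct PartialRecipe {
    var name: String?
    var ingredients: [String]?
}

// Simulates a model streaming a struct field by field.
func streamRecipe() -> AsyncStream<PartialRecipe> {
    AsyncStream { continuation in
        Task {
            var snapshot = PartialRecipe()
            snapshot.name = "Cacio e pepe"   // early tokens complete `name` first
            continuation.yield(snapshot)
            try? await Task.sleep(nanoseconds: 200_000_000)
            snapshot.ingredients = ["spaghetti", "pecorino", "black pepper"]
            continuation.yield(snapshot)     // later tokens fill in the rest
            continuation.finish()
        }
    }
}

// In an async context, consumers await each snapshot and can update
// per field, instead of blocking until the whole object has arrived.
for await partial in streamRecipe() {
    print(partial.name ?? "…", partial.ingredients ?? [])
}
```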
I have this toy agent I'm writing, and I always laugh that I, a human, write code that generates human-readable Markdown, feed it to an LLM asking it to produce JSON, then parse that (with code I, or it, wrote) and output it in a consistent human-readable form.
I'm thinking about letting it output freeform text and then using another model to force that into structured output.
I've found this approach does bring slightly better results: let the model "think" in natural language, then translate its conclusions to JSON. (Vibe-checked, not benchmarked.)
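For what it's worth, the two-pass version is only a couple of calls. A minimal sketch in Swift, where `complete(_:)` is a hypothetical stand-in for whatever model call you actually use (not a real API):

```swift
import Foundation

// Hypothetical stand-in for a chat-completion call; not a real API.
func complete(_ prompt: String) async throws -> String {
    // ... call your model of choice here ...
    fatalError("wire up a real model call")
}

struct Verdict: Decodable {
    let summary: String
    let confidence: Double
}

func analyze(_ input: String) async throws -> Verdict {
    // Pass 1: let the model reason in freeform natural language.
    let reasoning = try await complete("Think step by step about:\n\(input)")

    // Pass 2: a second call only translates that reasoning into strict JSON.
    let json = try await complete("""
        Convert the following analysis into JSON with keys \
        "summary" (string) and "confidence" (number 0-1). \
        Output only the JSON object.

        \(reasoning)
        """)

    return try JSONDecoder().decode(Verdict.self, from: Data(json.utf8))
}
```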
I believe it could be true, because I suspect the training dataset contained a lot more YAML than JSON. I mean... you know how much YAML gets churned out every second?
Well, the partially-generated-content streaming thing is great, and I haven't seen it anywhere else.