AI performs best in non-deterministic environments where broad, if slightly imperfect (or even hallucinatory), knowledge works just fine. Mapped onto today's jobs, the fit feels less natural for high-level engineering than for "looser" tasks that benefit from wider knowledge. In other words, it seems like AI, or AI-armed humans, is more squarely aimed at executives.
Delegating executive decision making to what is essentially an automated form of Reddit and Stack Overflow seems like it could lead to bad results.
For me, LLMs have been a very useful interface to tutorials when ramping up on new areas. That's about it, in my experience so far. I suppose the executive equivalent would be an interface to business books, case studies, etc. Given all the variance in such a high-dimensional space (probably higher-dimensional than a starter-tier tech project in a new area), I can't imagine it would actually be very useful once long-run results are considered.
What do you think they’re being used for right now?