
As a side note, while I know of several language model based systems that have been deployed in companies, some companies don't want to talk about it:

1. It's still perceived as a matter of competitive advantage.

2. There is a serious concern about backlash. The public's response to finding out that companies have used AI has often not been good (or even reasonable) -- particularly when worker replacement was involved.

It's a bit more complicated with "agents", as there are four or five competing definitions of what the term actually means. No one is really sure what an 'agentic' system is right now.



There is a very simple and obvious definition: it's agentic if it uses tool calls to accomplish a task.

This is the only one that makes sense. People want to conflate it with vague conceptions of AGI or ASI, or impose some ill-defined requirement for a certain level of autonomy, but that doesn't make sense.

An agent is an agent and an autonomous agent is an autonomous agent, but a fully autonomous agent is a fully autonomous agent. An AGI is an AGI but an ASI is an ASI.

Somehow using words and qualifiers to mean different specific things is controversial.

The only thing I will say to complicate it, though: if you have a workflow where none of the steps give the system the option to select from more than one tool call, then I'd suggest that should be called an LLM workflow and not an agent, because you've removed the agency by not giving it more than one possible action to select from.
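A minimal sketch of that distinction, assuming a hypothetical `call_model` function that returns either a final answer or a JSON-encoded tool call (none of these names come from a real library):

    # Minimal agent loop: the model picks which tool to call at each step.
    # `call_model` and both tools are hypothetical stand-ins, not a real API.
    import json

    TOOLS = {
        "search_docs": lambda query: f"results for {query}",
        "run_query": lambda sql: f"rows for {sql}",
    }

    def run_agent(task, call_model, max_steps=5):
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            # The model chooses a tool from TOOLS (or answers directly);
            # that choice among possible actions is the agency.
            reply = json.loads(call_model(history, tools=list(TOOLS)))
            if reply["type"] == "answer":
                return reply["content"]
            result = TOOLS[reply["tool"]](reply["argument"])
            history.append({"role": "tool", "content": result})
        return None  # step budget exhausted

    # By contrast, a "workflow" hard-codes one tool per step, e.g.
    # summarize(run_query(FIXED_SQL)): no choice of action, so no agency.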


Agentic AI comes out of historical AI research, systems computing, and, further back, biological and philosophical discussion. It's not about tool use -- although, ironically, animal tool use is a fascinating subject not yet corrupted by the hype around intelligence.

I implore you to look into that history to see how some people relate agency to autonomy, AGI, or ASI (wrongly, in my opinion: shoehorning OOP and UML diagrams plus limited database-like memory/context is not a path to AGI). Clever use of final layers and embeddings, and of how you store and weight them (and even more interesting combinations), may yield interesting results, because we can (buzzword warning) transcend written decoding paradigms -- the human brain clearly does not rely on language.

However, what gets marketed today is, as you say, not capable of any real agent autonomy in the academic sense -- these systems are just self-recursive ChatGPT prompts with additional constraining limits. One day it might be more, but from what I've seen, that's what the current libraries are all doing. Recursion has its advantages, but it also amplifies the unreliability of LLMs.
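For illustration, the "self-recursive prompt with constraining limits" pattern could look something like this sketch (`complete` is a hypothetical text-completion function; the depth cap and DONE marker stand in for the constraints):

    # The model's own output becomes its next prompt; a depth limit and
    # a stop marker act as the "constraining limits". Each pass feeds
    # possibly-flawed output back in, which is how recursion amplifies
    # the unreliability of the underlying LLM.
    def recursive_prompt(complete, prompt, max_depth=3):
        if max_depth == 0:
            return prompt  # constraint: hard cap on recursion depth
        output = complete("Continue working on this task:\n" + prompt)
        if "DONE" in output:  # constraint: the model signals completion
            return output
        return recursive_prompt(complete, output, max_depth - 1)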


That's not the definition I see most people using. There's plenty of tool calling going on purely to structure an output, which could also be achieved by prompting.

For me, agentic means that at least at some stage, one model is prompting another model.
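As a rough sketch of that definition (`planner` and `worker` are hypothetical callables wrapping two models, not any particular API):

    # "Agentic" under this definition: one model writes the prompt that
    # another model then executes, so a model, not a human, does the prompting.
    def delegate(planner, worker, task):
        plan = planner("Write a precise instruction for solving: " + task)
        return worker(plan)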


This has been my experience. Lots of companies are implementing LLMs but are not advertising it. There's virtually no upside to being public about it.


Investors are throwing money at AI projects. That's one upside.


Very accurate. So much of successful (from the company's PoV) real-world LLM use is about replacing employees. HN is still far too skeptical about how much this is in fact happening, and how likely it is to accelerate.


This is exactly it, except also: the use cases are so constrained that they're hardly using LLMs at all.



