Hacker News | sdevonoes's comments

The solution is to normalise the idea that using LLMs is not cool anymore.

There are two camps: you let people earn a living, or you let investors/executives get richer every year to the detriment of workers. I don't care about the medium; I'm not with the big fish.

And your parents must be proud of you. You're just another cog.

Stop it. You are just making CEOs like Coinbase's richer, and proving them right.

I'm your CEO. I see you and the rest of your peers have doubled your productivity in the last two months because of Claude. Good job! Now, since we don't really need to go that much faster, I'll fire half of you so my investor friends and I can make more profit.

Now of course, you may think you are such a good engineer that companies will kill for you... perhaps that's true now, but it's not true for 90% of the engineers out there. And as the pool of engineers gets reduced, the chances that you are not as good as you thought go up. So the real question is: can we all still make a good living by not using LLMs, by supporting each other and saying fuck the higher-ups? No, we can't. We are full of ourselves, full of elitism (this is HN). We are rational folks; we believe in numbers, in data; we know what we deserve. Fuck the rest. The ones who win are the higher-ups, of course, not us.


I understand and share your concerns but (without thinking I'm such a good engineer that companies will kill for me), I just don't share your conclusion.

To me, it's pretty simple. I have things to do. This makes it easier for me to do those things. Sometimes that means I can do more things, and sometimes it means I can spend less time on my work, and often both.

I have no idea what the future will hold. But to me, it would be very odd to avoid using extremely useful tools for my current work, because of that uncertainty about the future.


That's fine. Some people cannot (or don't want to) think about the more profound consequences of their actions. No one likes to stop for 10 minutes and think deeply about what they are actually doing. The easiest path is always to stay in "robot mode": my boss pays me $ for my job, therefore I need to satisfy that contract. No time to think.

No, see, this is the disconnect. Whatever happens with this in the future is not due to "the more profound consequences of [my] actions". Whether I choose to take advantage of these useful tools, or not, has absolutely no bearing on the hypothetical future consequences you're suggesting may come to pass.

If you're proposing an organized boycott, I would certainly entertain that proposal. But for me, the bar would be high for both certainty that the hypothetical consequences are likely and bad and that the boycott would have a chance of being effective.

At this particular moment, I'm pretty skeptical on both counts. And I'm flatly against the kind of vibes and guilt tripping driven "boycotts" that you're attempting here.

(And I'm way more bullish on the normal legislative and regulatory processes. I think organized boycotts are something to think about if those processes fail.)


> Good job! Now, since we don't really need to go that much faster, I'll fire half of you so my investor friends and I can make more profit.

Is this a thing? Are there companies out there that don't want to go faster?


The market can sneeze and suddenly there's a wave of hiring freezes, sounds plausible to me.

In reality, I think it's more likely that the lay-offs will be when the marginal rate of growth slows down. Once executives see that growth doesn't change much when hiring, they stop hiring, and once they see that growth doesn't decrease much when firing, they start firing.

There's still an opportunity for engineers to eat their bosses' lunch and just start their own company. It's never been easier to start a lower-cost competitor.

Employment isn't a social law of nature: it's a transaction of money for "units of work", just like the business might have with other vendors. Governments should be making it easier to become a vendor.


So now your competitors go twice as fast as you. Good luck with that.

As long as AI is being introduced by multibillion-dollar corporations, it's all a trick, a scam. They are just looking to increase their valuations. A waste of time.

+100. Companies certainly have a direct interest in pumping asset valuations, and emotional attachment is a financially valuable thing. Emotional attachment sells better than xxx these days.

Didn't get the "scary" part. I also keep my entertainment to the minimum dependencies possible. I try to rely on stuff I own: music CDs, ISO video games + emulators, physical books or ebooks (thanks, Anna), exercising outdoors... and ditching streaming like Netflix/YouTube, buying crap on Amazon, Uber, etc.

Scary = “if I have no future prospects for work”

It's the combination of AI changing the workplace, big tech shedding double-digit headcount, recruiting/hiring departments being broken by the AI arms race hitting job applications, and the macro business environment generally being on a downward slope at the moment.


The scary part is not having a job right now, that's all. It's not scary walking around getting more vitamin D.

Why TS? The npm ecosystem is insane and insecure. Not a chance we are running this on our machines.

Go/Rust are way better choices. Besides, if it's all vibe-coded, it shouldn't matter to the author.


I think the TypeScript ecosystem is more suitable for this.

I do not think Rust is a bad language. But the agent ecosystem changes very quickly, and in Rust, assembling and reshaping agent workflows is difficult.

Many people prefer Rust, and I understand why. It is a genuinely excellent language, and “Rust is a great language” is a strong message that attracts many developers. But as long as lifetimes exist, I think it will remain difficult.

The lifetime system assumes, in some sense, that humans can fully predict the lifecycle of values and resources. I am not sure that is truly possible in all domains. I am also not sure whether that model is linguistically suitable for the agent ecosystem.

In agent systems, requirements change constantly. Tools change, workflows change, providers change, schemas change, and failure policies change. In that kind of environment, I am not sure Rust is the right fit.

I like Rust a lot, and it is a language I genuinely want to learn. But I am not sure that applying Rust to everything is really the right answer.

I think Rust makes a lot of sense in relatively stable infrastructure ecosystems: operating systems, runtimes, sandboxes, and core low-level layers. But agent code usually requires high-level abstraction and rapid workflow composition. Doing that in Rust takes a tremendous amount of time.


Why do agent systems change more than other things? Maybe while we're here: what even is an agent system, anyway? Does one work on agent systems as the final product, or is the agent system what you work with to make something else?

The definition of “agent” has changed quite a bit, even in ACL papers and other academic work.

Looking at recent examples, the practical boundary seems to be whether an LLM uses tools. In some 2023 papers, certain pipeline-based systems were still referred to as agents. More recently, the term seems to mean something looser but more action-oriented: a system that understands a goal, uses tool calls, selects actions, and executes them.

In other words, there is still no fully settled engineering definition of what an agent is. I am not an expert or a graduate student; I mostly work as a subcontractor who gets hired by university professors to reproduce specific paper metrics.

In general, every system changes frequently in its early stage. Agent systems are no different. The workflows keep changing because the field does not yet have stable, openly accepted standards for AI development.

That is also why Claude, Codex, and others are fighting to define the standard. I think the term "harness," which Anthropic has been popularizing recently, is part of the same trend. By harness I mean the execution layer around the model call itself: context management, tool dispatch, retry and fallback policies, eval loops. That layer is still actively shifting. The naming is not settled, the responsibilities are not settled, and the boundaries between the harness and the model are not settled either. Each provider is drawing those lines a little differently right now.
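The retry-and-fallback piece of that execution layer can be sketched in a few lines of TypeScript. This is a hypothetical illustration, not any provider's API: the names (withRetryAndFallback, ModelCall) are made up, and the calls are synchronous here for clarity where real model calls would be async.

```typescript
// Sketch of one harness responsibility: retry a flaky primary model
// call a few times, then fall back to a second provider.
type ModelCall = (prompt: string) => string;

function withRetryAndFallback(
  primary: ModelCall,
  fallback: ModelCall,
  retries = 2
): ModelCall {
  return (prompt) => {
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        return primary(prompt);
      } catch {
        // a real harness would log the error and back off before retrying
      }
    }
    return fallback(prompt); // primary exhausted: fall back
  };
}

// Usage: a provider that is down, and a backup that answers.
let attempts = 0;
const flaky: ModelCall = () => {
  attempts++;
  throw new Error("provider down");
};
const backup: ModelCall = (p) => `backup: ${p}`;
const answer = withRetryAndFallback(flaky, backup)("summarize this");
// answer is "backup: summarize this" after 3 failed primary attempts
```

The point is that none of this logic belongs to the model itself, which is exactly why the harness/model boundary is where providers are currently drawing different lines.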

So my view is this: agent systems change frequently because the definition differs from person to person, the field keeps updating rapidly, and there is no engineering standard that has been firmly established yet.

Even the I/O standard itself is not really settled.


They also mentioned Go.

Not ready for production yet, but I've been working on https://wingman.actor for quite a while. It's a Golang-based portable agent runtime with minimal dependencies.

I am using TS sandboxed in Deno for all our agent code generated from a UI builder (inspired by OpenAI's own agent builder; it spits out the same code output).

You are wrong. We should ditch walled gardens like Twitter/Facebook/IG.

It’s not about them being smart or not. It’s about giving anthropic/openai/google the power to handle our future. Haven’t we learned anything about tech giants so far?
