Hacker News | tossandthrow's comments

Your example is not from a Jr developer but from a free agent.

I think you will find it very hard to hold a Jr dev in a corp responsible.

I actually think you will find it easier to work with agents, at higher quality and lower legal risk, than with Jr developers.

And this is only going to be amplified when it becomes common knowledge that AI poses less risk to projects than Jr staff.


I think this is underappreciated. I also had my stint (some years) of freelancing and found that my general take-home pay was too low.

That said, when staying in a job, skill atrophy is a very real thing.

As Nassim Taleb would say, it is less risky to be a contractor.


You are aware that using e.g. GitHub Copilot is not one-shot? It will start an agentic loop.

Unnecessary nitpicking

Why?

One-shotting has a very specific meaning, and agentic workflows are not it?

What is the implied meaning I should understand from them using one shot?

They might refer to the lack of humans in the loop.


You give a prompt, you get a PR. If it is ready to merge on the first attempt, that's a one-shot. The agentic loop is a detail in their context.

You can argue that you will have skill atrophy by not using LLMs.

We have implemented multi-cloud disaster recovery on our infrastructure. Something I would not have done yet, had we not had LLMs.

I am learning at an incredible rate with LLMs.


I kind of feel the same. I'm learning and doing things in areas I would just skip due to lack of time or fear.

But I’m so much more detached from the code; I don’t feel that ‘deep neural connection’ from actually spending days locked in a refactor or debugging a really complex issue.

I don’t know how I feel about it.


I strongly agree on the refactor, but for debugging I have another perspective: I think debugging is changing for the better, so it looks different.

Sure, you don't know the code by heart, but people debugging code compiled to assembly already deal with that.

The big difference is being able to unleash scripts that invalidate enormous numbers of hypotheses very fast and that can analyze the data.

Doing that by hand used to take hours, so it was a last-resort approach. Now it's very cheap, so validating many hypotheses is way cheaper!

I feel like my "debugging ability" in terms of value delivered has gone way up. As for the skill itself, it's changing; I cannot tell. But the value I am delivering in debugging sessions has gone way up.
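As a toy illustration of that kind of hypothesis-killing script (the log format, status codes, and size threshold here are all invented for the example): suppose the hypothesis is "timeouts only happen on large requests"; one awk pass over the whole log settles it.

```shell
# fabricated request log: "<status> <bytes>" per line
cat > /tmp/req.log <<'EOF'
200 512
504 2048
504 128
200 4096
EOF

# hypothesis: 504 timeouts only occur for requests >= 1024 bytes;
# count counterexamples -- any nonzero result invalidates the hypothesis
awk '$1 == 504 && $2 < 1024 { n++ } END { print n+0 }' /tmp/req.log
```

Here the counterexample count is 1 (the `504 128` line), so the hypothesis dies in seconds rather than after hours of manual log reading.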


As someone who has switched from mobile to web dev professionally for the last 6 months: if you care about code quality, you'll develop that neural connection after some time.

But if you don't and there's no PR process (side projects), the motivation to form that connection is quite low.


> If you care about code quality, you'll develop that neural connection after some time.

No, because you can get LLMs to produce high quality code that has gone through an infinite number of refinement/polish cycles and is far more exhaustive than the code you would have written yourself.

Once you hit that point, you find yourself in a directional/steering position divorced from the code since no matter what direction you take, you'll get high quality code.


Only if you never find opportunities to simplify the code it's writing and you don't review the code at all.

> no matter what direction you take, you'll get high quality code

This is not the case today. You get medium-quality, sometimes over-engineered code 10x faster.


Yes, you certainly can argue that, but you'd be wrong. The primary selling point of LLMs is that they solve the problem of needing skill to get things done.

That is not the entire selling point - so you are very wrong.

You very much decide how you employ LLMs.

Nobody is holding a gun to your head to use them in a certain way.

So if you use them in a way that increases your inherent risk, then you are using them incredibly wrong.


I suggest you read the sales pitches that these products have been making. Again, when I say that this is the selling point, I mean it: This is why management is buying them.

I've read the sales pitches, and they're not about replacing the need for skill. The Claude Design announcement from yesterday (https://www.anthropic.com/news/claude-design-anthropic-labs) is pretty typical in my experience. The pitch is that this is good for designers, because it will allow them to explore a much broader range of ideas and collaborate on them with counterparties more easily. The tool will give you cool little sliders to set the city size and arc width, but it doesn't explain why you would want to adjust these parameters or how to determine the correct values; that's your job.

I understand why a designer might read this post and not be happy about it. If you don't think your management values or appreciates design skill, you'd worry they're going to glaze over the bullet points about design productivity, and jump straight to the one where PMs and marketers can build prototypes and ignore you. But that's not what the sales pitch is focused on.


The majority of examples in the document you linked describe 'person without <skill> can do thing needing <skill>'. It's very much selling 'more output, less skill'.

Sales pitches don't mean jack. WTF are you talking about?

Sales pitches are literally the same thing as "the selling point".

Neither of those is necessarily a synonym for why you personally use them


They purportedly solve the problem of needing skill to get things done. IME, this is usually repeated by VC backed LLM companies or people who haven’t knowingly had to deal with other people’s bad results.

This all bumps up against the fact that most people default to “you use the tool wrong” and/or “you should only use it to do things where you already have firm grasp or at least foundational knowledge.”

It also bumps against the fact that the average person is using LLMs as a replacement for standard Google search.


I see it the completely opposite way: you use an LLM and correct all its mistakes. It allows you to deliver a rough solution very quickly and then refine it in combination with the AI, but it still gets completely lost and stuck on basic things. It’s a very useful companion that you can’t trust, but it’s made me 4-5x more productive and certainly less frustrated by the legacy codebase I work on.

Yeah, I wholeheartedly disagree with this. Because I understand the basics of coding, I can understand where the model gets stuck and prompt it in other directions.

If you don't know what's going on through the whole process, good luck with the end product.


You're learning at your standard rate of learning; you're just feeding yourself overconfidence about how much you're absorbing versus what the LLM is facilitating you rolling out.

This is such a weird statement on so many levels.

The latent assumption here is that learning is zero sum.

That you can take a 30-year-old from 1856, bring them into the present day, and they will learn whatever subject as fast as a present-day 20-year-old.

That teachers don't matter.

That engagement doesn't matter.

Learning is not zero-sum. Some cultural backgrounds make learning easier, some mentoring makes it easier, and some techniques increase engagement in ways that increase learning speed.


> I am learning at an incredible rate with LLMs

Could you do it again without the help of an LLM?

If no, then can you really claim to have learned anything?


The question is not whether you could do all of it without AI, but whether you can now do any of it that you couldn't before.

Not everyone learns at the same pace and not everyone has the same fault-tolerance threshold. In my experience some people are what I call "Japanese learners", perfecting by watching. They will learn with AI but would never do it themselves out of fear of getting something wrong, even while they understand most of it. Others, whom I call "western learners", will start right away and "get their hands dirty" without much knowledge, and also get it wrong right away. Both are valid learning strategies fitting different personalities.


I could definitely maintain the infrastructure without an llm. Albeit much slower.

And yes. If LLMs disappear, then we need to hire a lot of people to maintain the infrastructure.

Which naturally is a part of the risk modeling.


> I could definitely maintain the infrastructure without an llm

Not what I asked, but thanks for playing.


You literally asked that question

> Could you do it again without the help of an LLM?


And the question you answered was "could you maintain it without the help of an LLM"

So, you haven't really learned anything from any teacher if you could not do it again without them?

> So, you haven't really learned anything from any teacher if you could not do it again without them?

Well, yes?

What do you think "learning" means? If you cannot do something without the teacher, you haven't learned that thing.


That would be the definition of learning something, yes.

I mean...yeah?

If your child says they've learned their multiplication tables but they can't actually multiply any numbers you give them do they actually know how to do multiplication? I would say no.


For some reason people are perfectly able to understand this in the context of, say, cursive, calculator use, etc., but when it comes to their own skillset somehow it's going to be really different.

Yes that's exactly right.

Yes.

I think this is a bit dismissive.

It’s quite possible to be deep into solving a problem with an LLM guiding you where you’re reading and learning from what it says. This is not really that different from googling random blogs and learning from Stack Overflow.

Assuming everyone just sits there dribbling whilst Claude is in YOLO mode isn’t always correct.


>> I am learning a new skill with an instructor at an incredible rate

> Could you do it again on your own?

Can you see how nonsensical your stance is? You're straight up accusing GP of lying about learning something at an increased rate, OR suggesting that if they couldn't learn it, presumably at the same rate, on their own, they're not learning anything.

It's not very wise to project your own experiences onto others.


Actually, it’s much like taking a physics or engineering course: after the class you feel fully able to explain that day’s material, and yet you realize later, when you are doing the homework, that you did not actually understand it like you thought you did.

>I am learning at an incredible rate with LLMs.

I don't believe it. Having something else do the work for you is not learning, no matter how much you tell yourself it is.


If you've seen further it's only because you've stood on the shoulders of giants.

Having other people do work for you is how people get to focus on things they actually care about.

Do you use a compiler you didn't write yourself? If so can you really say you've ever learned anything about computers?


You have to build a computer to learn about computers!

I would argue that if you've just watched videos about building computers and haven't sat down and done one yourself, then yeah I don't see any evidence that you've learned how to build a computer.

And, so the anti-LLM argument goes, if you've not built the computer you can't learn anything about what computers could be used for.

That's not the anti-LLM argument, that's a brand new argument you made up.

Did you not read the comment thread you replied to? That's the exact argument that I_love_retros made above.

That is in fact the anti LLM argument you've ostensibly been discussing. If you want to talk to the person who made it up I'm not your guy.


It is easy to not believe if you only apply an incredibly narrow world view.

Open your eyes, and you might become a believer.


What is this, some sort of cult?

You mean the cult of "I can't see the viruses, therefore they don't exist"? As in "I can't imagine something, so it means it's a lie"?

Indeed, quite weird and no imagination.


No, it is a response exactly as snarky as the person being snarky about the usefulness of AI agents.

It does seem like there is a cult of people who categorically see LLMs as being poor at everything, without it being founded in any experience other than their 2023 afternoon playing around with them.


Who cares? Why are people so invested in trying to “convert” others to see the light?

Can’t you be satisfied with outcompeting “non believers”? What motivates you to argue on the internet about it? Deep down are you insecure about your reliance on these tools or something, and want everyone else to be as well?


Why do people invest themselves so hard in interjecting themselves into conversations about AI, telling people it doesn't work?

It feels so off to be rebuilding serious SaaS apps for production in days, only to be told it is not possible.


Who here said ai “doesn’t work”?

> We have gone multi-cloud disaster recovery on our infrastructure. Something I would not have done yet, had we not had LLMs.

That’s product atrophy, not skill atrophy.


Using LLMs as a learning tool isn’t what causes skill atrophy. It’s using them to solve entire problems without understanding what they’ve done.

And not even just understanding, but verifying that they’ve implemented the optimal solution.


It's partly that, but also reading and surface-level understanding something vs. generating it yourself are different skills with different depths. If you're learning a language, you can get good at listening without getting good at speaking, for example.

Also AI could help you pick those skills up again faster, although you wouldn’t need to ever pick those skills up again unless AI ceased to exist.

What an interesting paradox-like situation.


I believe some professor warned us about being over-reliant on Google/Reddit etc.: the “how would you be productive if the internet went down?” dilemma.

Well, if the internet is down, so is our revenue, buddy. Engineering throughput would be the last of our concerns.


The lock-in is incredibly weak. I could switch to whatever provider in minutes.

But it requires that one does not do something stupid.

E.g. for recurring tasks: keep the task specification in the source code and just ask Claude to execute it.

The same with all documentation, etc.
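A minimal sketch of that pattern (the file name, task wording, and the commented `claude -p` invocation are illustrative assumptions, not a prescribed layout):

```shell
# keep the recurring-task spec in the repo, next to the code it concerns
mkdir -p /tmp/repo_demo/docs/tasks
cat > /tmp/repo_demo/docs/tasks/rotate-credentials.md <<'EOF'
# Task: rotate service credentials
1. Generate new credentials in the secrets manager.
2. Update the deployment manifests under deploy/.
3. Verify health checks pass, then revoke the old credentials.
EOF

# hypothetical invocation -- any provider's agent CLI that accepts a
# prompt works the same way, which is what keeps the lock-in weak:
# claude -p "Execute the task in docs/tasks/rotate-credentials.md"
cat /tmp/repo_demo/docs/tasks/rotate-credentials.md
```

Because the spec lives in version control rather than in one vendor's chat history, switching agents is just pointing a different CLI at the same file.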


I love being able to point my brain cells at Lean, Coq, Haskell: all the fun stuff. And have my money job taken care of mostly with agents.

200 USD a month really is not that much. Especially not for an employer who is used to paying 150-250k a year for an engineer.

Especially for the value it provides.


I don't think that the majority of HN users are employers who are used to paying 150k-250k a year.

No, but a large proportion are likely employees of these employers

> it's just a bash script that goes through every single file in the codebase and, for each one and runs a "find the vulns here" prompt.

This really is not the case.

You have freedom of methodology.

You can also ask it to enumerate various risks and find proof of existence for each of them.

Certainly our LLM audits are not just a prompt per file - so I have a hard time believing that best in class tools would do this.


I've actually had pretty good results from doing exactly that. There was one FP when it tried to be Coverity and failed miserably, but the others were "you need to look at this bit more closely", and in most cases there was something there. Not necessarily a vuln but places where the code could have been written more clearly. It was like having your fourth grade English teacher looking over your shoulder and saying "you need to look at the grammar in this sentence more closely".

And using an LLM to audit your code isn't necessarily a case of turning it into perfect code, it's to keep ahead of the other side also using an LLM. You don't need to outrun the bear, just the other hikers.
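For reference, the "bash script plus a per-file prompt" approach being discussed can be sketched in a few lines (the fixture file and the commented `claude -p` call are assumptions; substitute whatever agent CLI you actually use):

```shell
# toy fixture so the loop is runnable end to end
mkdir -p /tmp/audit_demo/src
printf 'def handler(x):\n    return eval(x)  # suspicious\n' > /tmp/audit_demo/src/app.py

# one "find the vulns here" prompt per source file
for f in /tmp/audit_demo/src/*.py; do
  {
    echo "Review the following file for security vulnerabilities:"
    cat "$f"
  } > "$f.prompt"
  # claude -p "$(cat "$f.prompt")" > "$f.review"   # hypothetical agent call
done
ls /tmp/audit_demo/src
```

The richer audits described upthread (enumerating risks first, then hunting for evidence of each) replace the single fixed prompt with a two-pass loop, but the scaffolding is the same.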


This is Wikipedia.

For that type of publishing please use Encyclopedia Britannica.

You will get the URL in print in the 2027 edition.


Oh how I wish the print editions were still being released.

If you are really interested we could try piping their API (https://encyclopediaapi.com/products/index) to some printable format. Maybe we can even find a quality print-on-demand service or bind it by hand :)

There is only a market if there is a commodity.

La Liga is not a commodity, as I cannot equally make a La Liga.

This is the basis for antitrust regulations.

So no, there is no market. And as such, there is no market that functions as it should.


Absolutely incorrect. A market is just any structure, place, or mechanism that allows buyers and sellers to exchange goods, services, information, or assets. There can be one seller and many buyers, one buyer and many sellers, or anything in between.

If I go to your stall and coercively take a product you want 1000 tokens for, but only leave 10 tokens, is it still a market? It certainly fits the definition you present.

I would argue that price discovery is a big part of a market. Again, these things are already codified: e.g. wash trades, insider trading, etc.

