Hacker News | kansface's comments

> I just can't figure out _how_ to burn that much money a month responsibly.

I always have a few agents (2-5) doing research and working on plans in parallel. A plan is a thorough and unambiguous document describing the process to implement some feature. It contains goals, non-goals, data models, access patterns, explicit semantics, migrations, phasing, requirements, and acceptance criteria, both phased and final. Plans often require speculative work to formulate. Plans take hours to days to a couple of weeks to write. Humans may review the plans or derived RFCs. Chiefly, AI reviews the code (multiple agents with differing prompts, until a fixed point is reached between them). Tests and formal methods are meant to do the heavy lifting.
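In pseudocode terms, the "review until a fixed point" loop looks roughly like this (an illustrative sketch, not my actual harness; `review_to_fixed_point` and the reviewer callables are made-up names):

```python
def review_to_fixed_point(code, reviewers, max_rounds=10):
    """Run several differently-prompted reviewer agents over the code until a
    full round produces no further changes (a fixed point), or we give up."""
    for _ in range(max_rounds):
        changed = False
        for review in reviewers:
            revised = review(code)  # each reviewer returns (possibly) revised code
            if revised != code:
                code, changed = revised, True
        if not changed:  # every reviewer left the code untouched: fixed point
            return code
    return code
```

In practice each `review` call is an agent invocation with its own prompt, and `max_rounds` guards against two reviewers oscillating forever.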

In my highest volume weeks, I ship low hundreds of thousands of lines of software not counting changes to deps.

> At a corporate level, I'd much rather hire a junior engineer

Any formulation of a problem sufficient for a truly junior engineer to execute is better given to an agent. The solution is cheaper, faster, and likely better. If the latter doesn't hold, 10 independent solutions are still cheaper and faster than a junior engineer.

There is no longer any likely path to teaching a junior engineer the trade.


Just out of curiosity, what type of systems are you working on? What type of features did you implement on your 100k LOC week?

I don't know about the GP, but my workflow is similar to theirs; I aim to ship low thousands of lines per week. The fewer the better. I even tell the agent to only write high-SNR tests, otherwise it just adds useless "make sure this function returns this thing we hardcoded" tests.

I usually succeed, BTW. I spend a lot of time planning, but usually each PR is a few hundred lines, and fairly easily reviewable.

I mostly work with Python backends, though these days it might be any language (Ruby, Go, TS).


> What type of features did you implement on your 100k LOC week?

I work on 3rd-party API integrations, of which we have hundreds, each in its own repo. We need to build thousands more at a fraction of the cost. Any given integration historically takes a human anywhere from a few days to a few months to build, and is subject to ongoing maintenance. We frequently do not have access to the API, and we almost never have a representative data set if we do. Complex APIs tend to expose multiple, entwined data models. Documentation may be wrong or in a foreign language.

I've been building a new framework to do it better. Ideally, we can get an agent to spit them out in a few minutes to hours, with a much reduced ops burden for managing the fleet, all with very high confidence. The latter requires pushing as much into the type system as possible and leveraging static analysis. Much of the work has been embarrassingly parallelizable. Consider categorizing access patterns across the entire set, or ensuring byte-for-byte parity (over the input space of third-party API responses).
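The parity check is conceptually simple (an illustrative sketch, not the real framework; `canonicalize`, `parity_mismatches`, and the transform callables are stand-ins). Old and new integrations each transform a recorded API response, and we diff the canonicalized bytes:

```python
import json

def canonicalize(obj):
    """Serialize with sorted keys and fixed separators so logically
    equal outputs compare byte-for-byte."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def parity_mismatches(responses, old_transform, new_transform):
    """Return the recorded responses on which the two integrations
    disagree at the byte level. Each response is independent, which
    is what makes this embarrassingly parallel across the fleet."""
    return [
        r for r in responses
        if canonicalize(old_transform(r)) != canonicalize(new_transform(r))
    ]
```

Run over a corpus of recorded third-party responses, an empty mismatch list gives high confidence the rewrite is behavior-preserving.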

This is absolutely not a problem that a human or 2 could tackle prior to AI.


> I've been building a new framework to do it better.

So you're using your software factory to build a software factory. Not building thousands of integrations at a fraction of the cost.

I am sorry, I am probably just very dumb, but this sounds extremely wasteful. If this is a reflection of how software was made before AI I wonder how anything was ever made.

> In my highest volume weeks, I ship low hundreds of thousands of lines of software not counting changes to deps.

I'm suspicious that you actually get Claude to output that much usable code in a week, but maybe you do.

But I’m 100% positive that you’re not shipping even a small fraction of the amount of value that someone reading this 2 years ago would have expected from hundreds of thousands of lines of code.


I dunno, I've seen agents make boneheaded mistakes even a junior engineer wouldn't make. Treating them as strictly better than junior engineers is a problem, not just for that reason but because you're effectively killing the pipeline for senior engineers. Then what?

> I dunno I've seen agents make boneheaded mistakes even a junior engineer wouldn't make.

Yes, of course.

> you're effectively killing the pipeline for senior engineers. Then what?

I honestly don't know _what_. It's a prisoner's dilemma.


You will burn yourself out in months at that level of daily context switching.

It isn't worth it.


> In my highest volume weeks, I ship low hundreds of thousands of lines of software not counting changes to deps

But what do they actually do?

I keep seeing people wax poetic about the mountains and mountains of code that LLMs are dumping out, but I've yet to see anywhere near a proportionate amount of actually useful new apps or features. And if anything, the useful ones I do find are just more shovels for more AI. When do we get to the part where we start seeing the 10x gains from the billions of lines of code that have probably been generated at this point?


> If we had a functioning congress, I wonder if we might end up with legislation that these things need to be watermarked or otherwise made identifiable as AI generated.

Not a lawyer, but that reads as compelled speech to me. Materially misrepresenting an image would be libel, today, right?


Well, considering that AI-generated content can't be copyrighted (AFAIK, at least), I think we're in very different legal territory when it comes to AI creating things. And while it's true that deepfakes could be considered libel, good luck prosecuting that if you can't even figure out where the image came from.

The problem is it's all too easy to generate - you can't really do much about an individual piece of slop because there's so much of it. I think we need a way to filter this stuff, societally.


Just because he hasn’t pulled the trigger doesn’t mean there isn’t an actual red line.


"So what is the last resort? Piccadilly?"

https://www.youtube.com/watch?v=QgkUVIj3KWY


The red line is an invasion of Moscow or a strike on Russian nuclear capabilities.

Everything else is just an order for preemptive suicide.


Ukraine already executed a successful strike on Russian nuclear capabilities. In 2025 Operation “Spider’s Web” completely destroyed several irreplaceable nuclear-capable strategic bombers.

https://www.csis.org/analysis/how-ukraines-spider-web-operat...

There is no "red line".


That didn't cross the red line; the US had a phone call to talk it over, and Russia's policy for using nukes hadn't been breached.



A red-line attack on Russia's nuclear capabilities would have to present a credible and deliberate threat to Russia's nuclear strike capacity.

Operation Spider's Web was an attack on Russian airfields, primarily to reduce Russia's capacity to bomb Ukraine.

You can't nuke a country for hitting some of the nuclear strike capability you're using to conventionally bomb them.


This feels pretty fertile to me at the moment, because it has been prohibitively expensive to do. I expect there is a ton of low-hanging fruit. Why not in the age of AI?


This is bad epistemology. Incentives change the behavior on the edges.


It is a reimagining of Pascal’s Wager. On the original front, I don’t see the neo-Rationalists converting to Christianity en masse.


Pascal's wager is an argument that even if the probability of God's existence is very small, it is still rational to believe in God and live accordingly. Yudkowsky is the author of a blog post titled "Pascal's mugging", which likewise involves a small probability of an extremely bad outcome, but that blog post is completely silent about the dangerousness of AI research. (The post points out a paradox in decision theory, i.e., the theory that flows from the equation expected_utility = summation over every possible outcome O of U(O) * P(O).)
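To make that equation concrete (numbers invented purely for illustration), the mugging paradox is that a tiny probability attached to an absurdly large disutility can dominate the sum:

```python
def expected_utility(outcomes):
    """Sum of U(O) * P(O) over every possible outcome O."""
    return sum(u * p for u, p in outcomes)

# Pay the mugger: lose 10 utility with certainty.
pay = expected_utility([(-10.0, 1.0)])

# Refuse: with some tiny probability p, the mugger's absurd threat is real.
p = 1e-12
refuse = expected_utility([(-1e15, p), (0.0, 1.0 - p)])

# The absurd claim dominates the calculation: refusing (EU about -1000)
# comes out worse than paying (EU of -10), no matter how implausible the threat.
```

That is the decision-theoretic paradox the blog post is about, which, as noted, is distinct from any claim about the probability of AI going badly.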

No one to my knowledge has ever argued that AI research should be prohibited because of a very small probability of its turning out extremely badly. This is entirely a straw man set up by people who want AI research to continue. Yudkowsky argues that if AI research is allowed to continue, then the natural expected outcome will be very bad (probably human extinction, but more exotic terrible outcomes are also possible) [1]. There are others who argue that no team or organization anywhere should engage in any program of development that has a 10% or more chance of ending the human race without there first being an extensive public debate followed by a vote in which everyone can participate, and this is their objection to any continuance of AI research.

[1] But don't take my word for it: here is Yudkowsky writing in Apr 2022 in https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/: "When Earth’s prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%. That's why I would suggest reframing the problem - especially on an emotional level - to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained."


You will improve what you practice, possibly. This is advice for learning to play by ear. I was given the exact opposite advice by my classical guitar teacher in college because I was playing one thing and hearing something else. Sometimes, practice makes you worse or is a waste of time at best. If I could give better advice, it would be to be brutally mindful of what you are playing. Record it, and hear what is there. If it isn’t painful, you probably aren’t practicing.


I've generated 250 KLoC this week, with absolutely no changes in deps or any other shenanigans, and I'm not even really trying to optimize my output. I work on plans/proposals with 2 or 3 agents simultaneously in Cursor while another does the work, sometimes parallelized. I can't do that in less code and cleaner. I can't do it at all. Don't wait too long.


> I've generated 250KLoC this week

It's horrifying, all right, but not in the way you think lol. If you don't understand why this isn't a brag, then my job is very safe.


If managers can't understand why this isn't a brag, then your job is hardly safe.


Agents are new, but dumb coding practices are not. Despite how it may seem, the knowledge of how to manage development has increased. One practice I haven't seen for a while is managing by limiting the number of lines changed. (This was a dumb idea because rewriting a function or module is sometimes the only way to keep it comprehensible. I'm not talking about wholesale rewriting; I'm talking about code becoming encrusted with conditions because the devs were worried that changing the structure would change too many lines.)


Managers' jobs are more at risk than senior engineers'.

My company, and a few others I know, reduced the number of managers by 90% or more.


This. We moved to 'agile' at roughly the same time LLMs started arriving. You know who is mostly missing from the wider IT team landscape now? Most of the PMs and BAs, on purpose. Who is picking up their work? Devs, or more like devops, and surprisingly testers (since our stuff is ridiculously complex to test well for many use cases, and we can't just break the bank by destroying production with fast, half-assed deliveries).

And then there are the admin/unix/network/windows/support teams; nobody is replacing those anytime soon. The few PMs left are there only for big efforts requiring the organization of many teams, and we rarely do those now.

I don't like it (chasing people, babysitting processes and so on), but it's certainly just more work for me.


You're not responding to the OP's point, unless you're insinuating there won't ever be any managers, which won't happen. As long as there is one manager left, the OP's point remains.


if the industry got rid of 98% of the managers - the industry's output would go up by 500x


Tell me you're not a SWE without telling me. Engineers hire engineers at tech companies. Managers do the hiring paperwork.


Can you please stop posting so abrasively? It's not that this one comment is so bad, but you have a long pattern of doing so, and we're trying for something else here.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


Less is more; it's not a hard thing to understand. These companies are accumulating record levels of tech debt.


I use these tools a lot, and one thing that has stood out to me is that they LOVE to write code. A lot of it. And they're not super great at extracting the reusable parts. They also love to over-engineer things.

I've taken great pains to get by with as little code as possible, because the more code you have, the harder it is to change (obviously). And while there are absolutely instances in which I'm not super invested in a piece of code's ability to be changed, there are definitely instances in which I am.


Yeah, by my less-is-more logic it's sometimes also difficult to do. With that approach people try to be clever and write shorter code that's unmaintainable due to the mental gymnastics other people have to go through when reading it. What LLMs are doing is probably going for some kind of over-engineered "best practice" solution. I personally only use them for simple Laravel CRUD apps, and they are admittedly pretty good at that.


> I can't do that in less code and cleaner. I can't do it at all.

Can't do what, precisely?


LOL that's it? I generated over 5 million lines of code this week. You need to step it up or you're gonna be left behind.


You mean, you don't have

  yes \; >> main.c
running in the background 24/7?


I left an agent generating code over the weekend, so that I can get to 15 million.

What code? Code!


Your developers were so preoccupied with whether or not they could, they didn't stop to think if they should (add 250kloc)


I've worked with a type of (anti?) developer in my career who seems only able to add code, never change or take it away. There's some calculation bug, then a few lines down some code which corrects it instead of just fixing the original lines.

It's as bizarre and horrible as you might imagine.

And it's been more than one or two people I've seen do this.


Now they have agents.

People need to understand that code is a liability. LLMs haven't changed that at all. Your LLM will get every bit as confused when you have a bug somewhere in the backend and you then work around it with another line of code in the frontend.


They can always generate a new backend prototype from scratch.


This sounds like some kind of learned risk aversion, like they don't want to assume the responsibility of altering what's already there.


> I don't know. The World Happiness Report relies on one simple question, which is easy to criticise but at least it applies a clear and consistent method.

The simplicity is nice, but for the (probable) fact that suicide attempts/rates and emigration don't correspond... so let's not call it happiness.


People's time is conserved, so a couple of questions:

1. What percentage of the decline can be attributed to social media purely as a time sink?

2. What percentage of the decline can be attributed to increased political polarization encroaching on/claiming/colonizing formerly and nominally neutral spaces?

One remarkable counterexample in my neck of the woods is the Orthodox Church, which has done extraordinarily well since covid, picking up tons of converts. Of course, people themselves are conserved, too: that growth has come at the expense of protestant churches, which by my reckoning sort of stopped being churches during covid. I'd estimate 1/3 of my local congregation is non-Greek converts who seemingly have no intention of learning the language (services regularly run 1.5 to 2 hours, largely in Koine Greek)!

