willmadden's comments | Hacker News

Build a new feature. If you aren't bogged down in bureaucracy it will happen much faster.


I don't use LLMs much. When I do, the experience always feels like search 2.0. Information at your fingertips. But you need to know exactly what you're looking for to get exactly what you need. The more complicated the problem, the more fractal / divergent the outcomes are. (I'm forming the opinion that this is going to be the real limitation of LLMs.)

I recently used copilot.com (which uses GPT 5.1) to help solve a tricky problem:

   I have an arbitrary width rectangle that needs to be broken into smaller 
   random width rectangles (maintaining depth) within a given min/max range. 
The first solution merged the remainder (if less than min) into the last rectangle created (regardless if it exceeded the max).

So I poked the machine.

The next result used dynamic programming and generated every possible output combination. With a sufficiently large (yet still modest) rectangle, this is a factorial explosion, and it stalled the software.

So I poked the machine.

I realized this problem was essentially finding the distinct multisets of numbers that sum to some value. The next result used dynamic programming and only calculated the distinct sets (order is ignored). That way I could choose a random width from the set and then remove that value. (The LLM did not suggest this). However, even this was slow with a large enough rectangle.
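A minimal sketch of the distinct-multiset idea described above (the thread doesn't include code, so the function name and memoized-recursion formulation are my own; the comment only says it used dynamic programming over distinct sets):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def distinct_sets(total, lo, hi, start=None):
    """Distinct multisets (order ignored) of widths in [lo, hi]
    summing to `total`, via memoized recursion.

    Parts are generated in nondecreasing order (`start` tracks the
    smallest part still allowed), so each multiset appears once.
    """
    start = lo if start is None else start
    if total == 0:
        return ((),)  # one way to make 0: the empty multiset
    sets = []
    for w in range(start, min(hi, total) + 1):
        for rest in distinct_sets(total - w, lo, hi, w):
            sets.append((w,) + rest)
    return tuple(sets)

# e.g. distinct_sets(7, 2, 4) -> ((2, 2, 3), (3, 4))
```

Even with memoization the number of distinct multisets still grows quickly with `total`, which matches the slowdown described for large rectangles.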

So I poked my brain.

I realized I could start off with a greedy solution: choose a random width within range and subtract it from the remaining width. Once the remaining width is small enough, use dynamic programming. Then I had to handle the edge cases (no valid sets, when it's okay to break the rules, etc.).
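The hybrid greedy-then-exact approach above could look roughly like this (a sketch under my own assumptions: the function names, the `3 * hi` cutoff heuristic, and the merge-into-last-width fallback are illustrative, not the commenter's actual code):

```python
import random

def _exact_splits(total, lo, hi, start=None):
    """Distinct multisets of widths in [lo, hi] summing to `total`
    (parts nondecreasing, so order is ignored)."""
    start = lo if start is None else start
    if total == 0:
        return [[]]
    out = []
    for w in range(start, min(hi, total) + 1):
        for rest in _exact_splits(total - w, lo, hi, w):
            out.append([w] + rest)
    return out

def split_width(total, lo, hi, rng=None):
    """Greedy phase: pick random widths while the remainder is large.
    Exact phase: once it is small, pick a random multiset that fits."""
    rng = rng or random.Random()
    cutoff = 3 * hi  # hypothetical switch-over point; tune as needed
    widths, remaining = [], total
    while remaining > cutoff:
        # Cap at remaining - lo so the remainder stays fillable.
        w = rng.randint(lo, min(hi, remaining - lo))
        widths.append(w)
        remaining -= w
    choices = _exact_splits(remaining, lo, hi)
    if choices:
        tail = list(rng.choice(choices))
        rng.shuffle(tail)  # restore a random order within the tail
        widths.extend(tail)
    elif widths:
        # Edge case: no exact fit, so bend the rules by merging the
        # remainder into the last width (it may exceed hi).
        widths[-1] += remaining
    else:
        widths.append(remaining)  # total itself was unsplittable
    return widths
```

The greedy phase keeps the exact enumeration confined to a small remainder, which avoids the combinatorial blow-up of enumerating splits of the full width.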

So the LLMs are useful, but this took 2-3 hours IIRC (thinking, implementation, testing in an environment). Pretty sure I would have landed on a solution within the same time frame. Probably greedy with backtracking to force-fit the output.


I just tested this with Claude Code and Opus 4.6, with the following prompt:

"I have an arbitrary width rectangle that needs to be broken into smaller random width rectangles (maintaining depth) within a given min/max range. The solution needs to be highly performant from an algorithmic standpoint, well-tested using TDD and Red/Green testing, written in python, and not have any subtle errors."

It got the answer you ended up with (if I'm understanding you correctly) the first time in just over 2 minutes of working, and included a solid test suite examining edge cases and with input validation.


How can we verify if you don't post the code?

I appreciate you testing, even though it's not a great comparison:

- My feedback cycle of LLM prompting forced me to be more explicit with each call, which benefited your prompt since I gave you exactly what to look for with fewer nuances.

- Maybe GPT 5.1 is older or has been kneecapped relative to newer GPT versions

- Maybe Opus/Claude is just a way better model :P

Please post the code!

Edit: Regarding "exactly what to look for", when solving a new problem, rarely is all the nuance available for the first iteration.


I didn't prompt anything odd, just standard prompt "etiquette". Actually, I prompted significantly less than I usually would, trying to keep it a simple prompt like you did.


I’ll believe it when I see the code


> I don't use LLMs much

Sorry to be so blunt, but it's not surprising that you aren't able to get much value from these tools, considering you don't use them much.

Getting value from LLMs / agents is a skill like any other. If you don't practice it deliberately, you will likely be bad at it. It would be a mistake to confuse lack of personal skill for lack of tool capability. But I see people make this mistake all the time.


Would be helpful if you pointed out what I did wrong :).

If it's "you didn't explain the problem clearly enough", then that aligns with my original comment.


If you ask the chatbot for best practices it will tell you, including that you don't use a chatbot.


Most of these are new features, but then they have to integrate with the existing software so it's not really greenfield. (Not to mention that our clients aren't getting any faster at approving new features, either.)


Did you train a self-hosted/open source LLM on your existing software and documentation? That should make it far more useful. It's not claude code, but some of those models are 80% there. In 6 months they'll be today's claude code.


What would that help us with?


The LLM needs to understand your existing codebase if it's going to be useful building features that integrate with said codebase seamlessly without breaking things or assuming things that don't exist. That's not something you want to give away to a private AI company, so self-host an open source model.


I understood how that would help the LLM, I asked how that would help us. What decision would that inform for us that we're currently having trouble with?


What "decision" are you talking about? You said you don't code, but you administer and maintain code. It sounds like you integrate new features into an existing codebase?


It's this kind of thinking that tells me people can't be trusted with their comments on here re: "Omg I can produce code faster and it'll do this and that".

No, simply 'producing a feature' ain't it, bud. That's one piece of the puzzle.


What they sold isn't a security regulated by the SEC. There is no "insider trading". There aren't enough facts to determine if what his sons did was ethical and/or legal.


Floor safes do better than above-ground safes.


AI is useful for researching things far more quickly before making a decision and for automation/robotics. Motivated people don't need a nagbot to replace their calendar.


What are you talking about? They are both private companies. They don't have public financial reporting.


            *
           ***
            *
          /***\
         /*@***\
        /*******\
       /**@******\
      /***********\
     /**********@**\
    /***************\
   /*****@***********\
  /*******************\
 /***************@*****\
           |||
           |||
           |||

 ===[ MERRY * CHRISTMAS ]===


Terrible headline. The unredacted data was included in the file. That's not a hack.


We'll know when all of the old Bitcoin P2PK addresses and transacted from addresses are swept.


The funny thing is that nobody will ever do that. The moment someone uses quantum computing or any other technology to crack Bitcoin in a visible way, the coins they just gave themselves become worthless, because confidence collapses.


Well, they wouldn't go for the trillion-dollar whale addresses.

They would hack random, long-unused, dead addresses holding five-figure amounts and slowly convert those to money. They would eventually start to significantly lower the value, and would eventually crash Bitcoin if too greedy, but could get filthy rich.


There are some Bitcoin puzzles or old wallets that give some plausible deniability.


It's self-inflicted. The big studios are beyond terrible. The games are more about social conditioning than entertainment.

See Clair Obscur. They got funding from the State of France and the French National Centre for Cinema, and the game is 100x better than the slop the big studios publish.


Your mention of "social conditioning" apropos of nothing gives you away like the "three fingers" in Inglourious Basterds. I would highly suggest not basing your entire personality and opinions on a slack-jawed streamer's meandering rant delivered from his RGB gaming chair for 5 hours straight.



Jesus fucking christ


Social conditioning? Clair Obscur is good and was very unique, but is not 100x better than "the slop big studios publish".


Infinity times better could be a better hyperbole: very good game versus no good. Dividing by zero gets you infinity instead of just 100.


"Let's just all make Clair Obscur/Minecraft/Blue Prince" is not a repeatable strategy (every indie dev is trying to make good games). How much did it cost to make the Beatles' albums? A piano, drums, a couple of guitars and salaries for 4 guys? Why don't the big studios today with all their money just hire another Beatles?

Same reason why Ubisoft isn't just making another Balatro. Industrializing culture isn't (yet?) a solved problem.


> How much did it cost to make the Beatles' albums? A piano, drums, a couple of guitars and salaries for 4 guys?

The Beatles did only take a few days to knock out each of their earliest LPs. However, per Wikipedia, "the group spent 700 hours on [Sgt. Pepper's Lonely Hearts Club Band]. The final cost [...] was approximately £25,000 (equivalent to £573,000 in 2023)."

So, actually, envelope-pushing cultural landmarks typically do require a lot of effort and money to complete.


On the other hand I'm kind of shocked that the big gaming studios never seem to be fast followers. It feels like we've been through multiple waves of Balatro-likes from indie developers already. Where is the Ubisoft Lethal Company or something? You'd think having a studio full of experienced developers with tons of tech they could hop on trends quickly. It seems like they think it's beneath them or something though. Or maybe they're just structurally incapable of moving quickly. It did take 11 years and like 4 redesigns to make Skull & Bones after all.


This is conjecture (I do work in the industry, though not AAA), but: following trends simply isn't part of their business model. Following current trends is a very unpredictable business. Many try, and many fail. AAA has had the luxury of somewhat predictable sales. They can make big bets like working years on a game, since they know they will have millions of players. And they know smaller studios can't compete with them in that business.

But, of course, making games is hard, and sometimes they fail. And now the free tools are getting really good, and smaller studios are becoming increasingly competent. Will we soon see the big ones fall? Their only way to survive is to keep going bigger, escaping the smaller studios to a place they can't reach. Now we have AAAA games. But is there a limit where players stop caring how many As a game has?


The more people you add, the slower you get, not faster. Large companies are notoriously slow-moving (and particularly slow to change direction) vs. small upstarts.


Yeah, but at this point it's weird they don't just grab a studio, give them funding for 2 years, tell them "copy the latest indie trends with a tad more polish", and let them cook to see what comes out.


They tried that, e.g. "EA Originals"[0] is basically that (there are similar programs at other major publishers). I suspect it proved not to be a big money maker at the scale required to move the needle at publishers of that size, and that they are keeping these on as a sort of prestige program.

[0] https://en.wikipedia.org/wiki/Electronic_Arts#EA_Originals


>Industrializing culture isn’t a solved problem

Someone’s never heard of the American music industry


Correct, a better description.


Yes, it is 100x better than the slop the big studios publish.

Enter: parasitic storytelling https://www.youtube.com/watch?v=gFxu3Q71NvE


Issues like what? Civilization? Ending slavery? Those aren't teachings, they are genocidal lies.

