
These tools also require the developer class that they are intended to replace to continue doing what they currently do (creating the knowledge source to train the AI on). It's not like the AIs are going to create the accessible knowledge bases to train AIs on, especially for new language extensions/libraries/etc. This is one-and-done development: it will give a one-time gain, and then companies will be shocked when it falls apart and there are no developers trained up (because they all had to switch careers) to replace them. Unless Google's expectation is that all languages/development/libraries will just be static going forward.


One of my concerns is that AI may actually slow innovation in software development (tooling, languages, protocols, frameworks, and libraries), because the opportunity cost of adopting anything new increases if AI remains unable to be taught new knowledge quickly.


It also bugs me that these tools will reduce the incentive to write better frameworks and language features: if an LLM just writes all the horrible boilerplate for us, there's no pressure to find ways to design systems that don't need it.

The idea that our current languages might be as far as we get is absolutely demoralising. I don't want a tool to help me write pointless boilerplate in a bad language, I want a better language.


This is my main concern. What's the point of other tools when none of the LLMs have been trained on them and you need to deliver yesterday?

It's an insanely conservative tool.


You already see this if you use a language outside of Python, JS or SQL.


That is solved via larger contexts.


It’s not, unless contexts get as large as comparable training materials. And you’d have to compile adequate materials: clearly, just adding some documentation about $tool will not have the same effect as adding all the gigabytes of internet discussion and open-source code regarding $tool that the model would otherwise have been trained on. It’s like handing someone the documentation and immediately asking them questions about the tool, versus asking someone with years of experience using it.

Lastly, it’s also a huge waste of energy to feed the same information over and over again for each query.
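
To make that repeated cost concrete, here is a minimal sketch of the docs-in-context approach, assuming a hypothetical completion function passed in as call_llm (nothing here is a real library API):

    from typing import Callable

    def answer_with_docs(
        question: str,
        docs: str,
        call_llm: Callable[[str], str],
    ) -> str:
        """Answer a question by prepending $tool's full docs to the prompt.

        The same `docs` string is re-sent (and, absent caching, re-processed
        by the model) on every single call; that is the repeated energy cost
        described above.
        """
        prompt = (
            "Documentation for the tool follows:\n\n"
            + docs
            + "\n\nQuestion: "
            + question
        )
        return call_llm(prompt)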


- Context windows of millions of tokens are now the frontier.

- Context over training is like someone referencing the docs vs vaguely recalling them from decayed memory.

- Context caching addresses the cost of re-sending the same prefix (see the sketch below).
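
On the caching point, here is a hedged sketch using Anthropic's prompt caching on the Messages API as one concrete example; the model string is a placeholder and docs/mytool.md is a hypothetical documentation file:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    with open("docs/mytool.md") as f:  # hypothetical documentation file
        TOOL_DOCS = f.read()

    def ask(question: str) -> str:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder; substitute a current model
            max_tokens=1024,
            system=[
                {
                    "type": "text",
                    "text": TOOL_DOCS,
                    # Mark this large prefix as cacheable so subsequent calls
                    # reuse the cached tokens instead of paying to process the
                    # whole docs blob on every query.
                    "cache_control": {"type": "ephemeral"},
                }
            ],
            messages=[{"role": "user", "content": question}],
        )
        return response.content[0].text

Note the cache is short-lived (minutes), so this cuts the per-query cost within a session; it does not substitute for the model having been trained on the material, which is the grandparent's point.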


You’re assuming that everything can be easily known from documentation. That’s far from the truth. A lot of what LLMs produce is informed by having been trained on large amounts of source code and large amounts of discussions where people have shared their knowledge from experience, which you can’t get from the documentation.


Yeah, I'm thinking along the same lines.

The companies that keep valuing the expensive talent currently working at Google will be the winners.

Google and others are betting big right now, but I feel the winners might be those who watch how it unfolds first.



