motoboi's comments | Hacker News

The article interviewed some actual hydrologists from Iran. I’m pretty sure they are aware of population growth in their homeland.

"How did you go bankrupt? Two ways. Gradually, then suddenly." - Hemingway

Humans are notoriously bad at heading off long-term consequences.


While the romantic in me is hoping that qanats would indeed still do the job, we don’t know how hand-picked these hydrologists’ opinions are.

Seeing that their product keeps improving, I’m actually fascinated by that level of discipline.

Total focus on the main product, which is the API.


This is a hilarious take.

Car company makes an innovative new engine for their vehicle. A user wants to get a replacement key made for the vehicle, but the company doesn't have a process in place to make replacement keys:

Are you fascinated by this hypothetical company's level of discipline? Or would you consider it negligent and inept?


If the car in question were probably the hottest software in town and the user wanted to change the photos on their profile, I'd find it very interesting if they kept the discipline to steer the whole team away from such a low-priority change and toward keeping it the hottest software in town.

Let's keep in mind that OpenAI is a small company (in people terms), and they are fighting toe to toe with Google.

Heck, if they mess up a quarter they are probably dead.


Besides the fact that you're completely shifting the goalposts here with your analogies, changing an email address is a pretty normal feature of any service purporting to be serious. Also, you seem to believe it is impossible for such a large company with such investment to work on multiple things simultaneously.

The fact that they can, but choose not to, is exactly what astonishes me.

Authentication to the API platform seems like an important part of that product.

This is what "move fast and break things" looks like in an enterprise the size of Microsoft.

It's mostly breaking things and very little moving fast.

But the idea is that it's AI or death, so a few broken buttons seem less important than the buttons being there at all. A button that works is a problem involving several teams, so no one is actually responsible; a button that merely exists is one team's problem, and hell yeah, they solved it in the first sprint.


"Move fast and break things" is fine if you're a social networking site and breaking things means people can't get their racist memes or browse marketplace for twenty minutes until you push a change.

It's less fine if the things you're breaking are your core operating systems and the office suite that makes you most of your money, and it takes you months to get the relevant teams aligned to push out a fix for the bad idea your execs pushed.


This is a problem at a wide array of tech companies. Everyone wants to be Meta, everyone wants to be Google.

Guys, we're building B2B enterprise software. The most important thing our clients care about is this hunk of junk working. Changing it is probably bad, actually, because the users are using it 8 hours a day and they don't want to deal with annoying popups about new features and UI churn for the sake of churn.


It should be noted that the things you are supposed to be allowed to break are YOUR things. When you start breaking MY things then we're going to have a real problem.

Reflect for a moment on the fact that LLMs currently are just text generators.

Also, the conversational behavior we see is just the model mimicking example conversations it was trained on, so when we say “System: you are a helpful assistant. User: let’s talk. Assistant:” it will complete the text in a way that mimics a conversation.
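
A minimal sketch of that idea in Python (the generate() function here is a made-up placeholder, not any real API): the whole “conversation” is just one flat string that the model keeps extending.

    # The chat template is nothing more than concatenated text.
    prompt = (
        "System: you are a helpful assistant.\n"
        "User: let's talk.\n"
        "Assistant:"
    )

    def generate(text: str) -> str:
        # Placeholder for a real completion call; a trained model would
        # return the most likely continuation of the text it was given.
        return " Sure! What would you like to talk about?"

    # Because the training data is full of dialogue-shaped text, the most
    # likely continuation *looks like* an assistant answering.
    print(prompt + generate(prompt))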

Yeah, we improved over that using reinforcement learning to steer the text generation into paths that lead to problem solving and more “agentic” traces (“I need to open this file the user talked about to read it and then I should run bash grep over it to find the function the user cited”), but that’s just a clever way we found to let the model itself discover which text generation paths we like the most (or are more useful to us).

So, to address your discomfort: we (humans) trained the model to spit out answers (there are thousands of human beings right now writing nicely thought-out and formatted answers to common questions so that we can train the models on them).

If we try to train the models to mimic long dances into shared meaning, we will probably decrease their utility. And we wouldn't be able to do that anyway, because then we would need customized text traces for each individual instead of question-answer pairs.

Downvoters: I simplified things a lot here, in the name of understanding, so bear with me.


> Reflect for a moment on the fact that LLMs currently are just text generators.

You could say the same thing about humans.


No, you actually can't.

Humans existed for tens to hundreds of thousands of years without text, or even words for that matter.


I disagree: it is language that makes us human.

I disagree. You're still human if you're deaf and mute. Our intellectual processing power, or that of animals for that matter, has nothing to do with language.

Being deaf and mute doesn't imply lack of language. But being unable to communicate absolutely strikes me as non-human.

Ok say you grew up alone in the woods, are you no longer human? The capability to learn language is no doubt unique, but language itself isn't the basis of intelligence.

> Ok say you grew up alone in the woods, are you no longer human?

No. You are not. You are a hairless, bipedal ape.

> but language itself isn't the basis of intelligence.

Intelligence is an illusion based in language. Without language, intelligence is meaningless.


No, you cannot. Our abstract language abilities (especially the written word part) are a very thin layer on top of hundreds of millions of years of evolution in an information dense environment.

Sure, but language is the only thing that meaningfully separates us from other great apes.

No, it isn't. Most animals also have a language, and humans do way more things differently than just speak.

> most animals also have a language

Bruh


The human world model is based on physical sensors and actions. LLMs are based on our formal text communication. Very different!

Just yesterday I observed myself acting on an external stimulus without any internal words (this happens continuously, but it is hard to notice because we usually don't pay attention to how we do things): I sat in the waiting area of a cinema. A woman walked by and dropped her scarf without noticing. I automatically, without thinking, raised my arm and pointer finger towards her, and when I had her attention, pointed behind her. I did not have time to think even a single word while that happened.

Most of what we do does not involve any words or even just "symbols", not even internally. Instead, it is a neural signal from sensors into the brain, doing some loops, directly to muscle activation. Without going through the add-on complexity of language, or even "symbols".

Our word generator is not the core of our being, it is an add-on. When we generate words it's also very far from being a direct representation of internal state. Instead, we have to meander and iterate to come up with appropriate words for an internal state we are not even quite aware of. That's why artists came up with all kinds of experiments to better represent our internal state, because people always knew the words we produce don't represent it very well.

That is also how people always get into arguments about definitions: the words are secondary, and the further you get from the center of a word's established meaning, the more the differences between various people show. (The best option is to stop insisting that words are the center of the universe, even just the human universe, and/or to choose words that have the subject of discussion more firmly in the center of their established use.)

We are text generators in some areas, I don't doubt that. Just a few months ago I listened to some guy speaking at a small rally. I am certain that not a single sentence he said was of his own making; he was just parroting things he had read (as a former East German, I know enough Marx/Engels/Lenin to recognize it). I don't want to single that person out, we all have those moments when we speak about things we have no experience with. We read text, and when prompted we regurgitate a version of it. In those moments we are probably closest to LLM output. When prompted, we cannot fall back on generating fresh text from our own actual experience; instead we keep using text we heard or read, with only very superficial understanding, and as soon as an actual expert shows up we become defensive and try to change the context frame.


Without language we're just bald, bipedal chimps. Language is what makes us human.

> The human world model

Bruh this concept is insane


How do you reconcile this belief with the fact that we evolved from organisms that had no concept of text?

What is there to reconcile? Humans are not the things we evolved from.

You could, but you’d be missing a big part of the picture. Humans are also (at least) symbol manipulators.

Same thing

In the past, life in the fields, living off the land, meant death from starvation.

Some unsung heroes:

- The person who discovered how to fix nitrogen in the soil saved more lives than all other people in history combined.
- Norman Borlaug, father of the Green Revolution, saved more than 1 billion people from starvation.


Borlaug was a very important figure in global food security, but he was a plant breeder, not the guy(s) who figured out how to fix nitrogen from the air into fertilizer. The nitrogen people were Haber and Bosch.

Millions of people probably do owe their very existence to these men though, I agree with that.

However part of me (maybe a slightly misanthropic part?) wonders if it might be a bit like feeding stray cats, and now we have a huge herd of cats that are rapidly outstripping the ultimate carrying capacity of their environment and it doesn't end well. But since I'm one of the cats, I say we just go with it and see what happens.


I'm sorry. That was supposed to be a list but the formatter ate the lines.

I see, that makes sense. Sorry for being the "well, actually" guy.

The knowledge is probably in the pre-training data (the internet documents the LLM is trained on to get a good grasp of things), but probably very poorly represented in the reinforcement learning phase.

Which is to say that Anthropic probably doesn't have good training documents and evals to teach the model how to do that.

Well they didn’t. But now they have some.

If the author wants to improve his efficiency even more, I'd suggest he start creating tools that allow a human to create a text trace of a good run at decompiling this project.

Those traces can be hosted somewhere Anthropic can see, and then, after the next model pre-training, there's a good chance the model becomes even better at this task.
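
As a rough sketch of what I mean (the file name and the fields here are invented for illustration, not anything Anthropic actually uses), a trace could be as simple as an append-only log of the steps a human took during one successful run:

    import json

    # Hypothetical example: record one successful decompilation session as a
    # step-by-step text trace that could later feed training data or evals.
    trace = {
        "task": "recover compilable C source for checksum.o",
        "steps": [
            "disassembled the object file and read the output",
            "recognized the loop at offset 0x40 as a CRC32 table lookup",
            "wrote a first C draft and compared its disassembly to the original",
            "adjusted integer widths until both binaries matched",
        ],
        "result": "byte-identical object file after recompilation",
    }

    # Append one trace per line so the collection stays easy to diff and share.
    with open("decompilation_traces.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(trace) + "\n")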


I'm starting to form an image of the Zig community: people who like to write and reason, nice typography, videogame-inspired visuals.


Agree, it's funny to see how that seems to be developing organically!


How many people on the team?


I work in a small team, about five people.


I believe the author would love stg: https://stacked-git.github.io/guides/tutorial/#patches


Kubernetes is not only an orchestrator but also a scheduler.

It's a way to run arbitrary processes on a bunch of servers.

But what if your processes are known beforehand? Then you don't need a scheduler, nor an orchestrator.

If it's just your web app with two containers and nothing more?

