Hacker News | Dlanv's comments

With above-average human reflexes, the kid would have been hit at 14mph instead of 6mph.

About 5x more kinetic energy.
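Since kinetic energy scales with the square of speed (and the mass cancels out of the ratio), the "about 5x" figure checks out as (14/6)² ≈ 5.4. A quick sanity check:

```python
# Kinetic energy scales as v^2, so the mass cancels in the ratio.
v_human, v_waymo = 14.0, 6.0  # impact speeds in mph, from the comment above
ratio = (v_human / v_waymo) ** 2
print(f"{ratio:.1f}x more kinetic energy")  # -> 5.4x more kinetic energy
```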


Yeah, if a human made the same mistake as the Waymo, driving too fast near the school, they would have hurt the kid much worse than the Waymo did.

So if we're going to have cars drive irresponsibly fast near schools, it's better that they be piloted by robots.

But there may be a better solution...


But would a human be driving at 17 mph in a school zone during drop-off hours? I'd argue a human might be slower exactly because of this scenario.


> would a human be driving at 17 in a school zone during drop off hours?

In my experience in California, always and yes.


Maybe we should not only replace the unsafe humans with robots, but also have the robots drive in a safe manner near schools rather than replicating the unsafe human behavior?


One argument for the robots is that they can be programmed to drive safer, while humans can't.

But that depends on reliability, especially in unforeseen (and untrained-upon) circumstances. We'll have to see how they do, but so far they have been doing better than expected.


Depends on the school zone. The tech school near me is in a 50 zone and they don't even turn on the "20 when flashing" signs because if you're gonna walk there, you're gonna come in via residential side streets in the back and the school itself is way back off the road. The other school near me is downtown and you wouldn't be able to go 17 even if you wanted to.


Kinetic energy is a bad metric. Acceleration is what splats people.

Jumping out of a plane wearing a parachute vs jumping off a building without one.

But acceleration is hard to calculate without knowing the stopping time or distance (assuming the deceleration is even constant), and it doesn't give you that exponent over velocity yielding a big number that's great for heartstring-grabbing and appealing to emotion, hence why nobody ever uses it.
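For what it's worth, if you do know the stopping distance, the average deceleration falls out of v² = 2·a·d. A rough sketch, with made-up stopping distances (0.1 m vs 0.3 m are hypothetical, just to show the effect):

```python
# Average deceleration assuming constant deceleration over the stopping distance:
# v^2 = 2 * a * d  =>  a = v^2 / (2 * d)
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def avg_deceleration(speed_mph: float, stopping_distance_m: float) -> float:
    """Average deceleration in m/s^2 for a body brought to rest over a distance."""
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * stopping_distance_m)

# Same 6 mph impact, absorbed over 0.1 m vs 0.3 m (hypothetical numbers):
print(avg_deceleration(6, 0.1))  # stiffer stop -> much higher deceleration
print(avg_deceleration(6, 0.3))
```

This is why the same kinetic energy can splat or not splat depending on how abruptly it's absorbed.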


Basically, Waymo just prevented a kid's potential death.

Had any other car been there, probably including a Tesla, the poor kid would have been hit with 4-10x more force.


> any other car been there, probably including Tesla

Cheap shots. If this had been a Tesla, there would be live media coverage across every news outlet around the world and congressmen racing to start an investigation.

Look at any thread where Tesla is mentioned and see how many Waymo simps are mansplaining lidar.


You just invented a hypothetical situation in your head then drew conclusions from it. In my version, the other car misses the kid entirely.


Yeah, but Tesla has a proven bad safety record. Waymo doesn't, and the GP comment is alluding to that.


Evidence (preferably with recent Teslas/HW4)?


Joined the wait-list. Can't wait.

AI already saved me from an unnecessary surgery by recommending various modern-medicine (not alternative-medicine) alternatives, which ended up being effective.

Between genetics, blood, stool, and urine tests, scans (ultrasound, MRI, x-ray, etc.), and medical history... doctors don't have time for a patient with non-trivial or non-obvious issues. AI has the time.


To me those are all either inferior to working on AI or can be done outside of work. And for the founder of Google, probably Google is the best place to work on AI.


Why does that matter? You can still do that. Nothing is stopping you from finding a local cleaner and negotiating the price, like our parents did.

People just don't want to do that.


No, I can't do it that way anymore: my local paper doesn't have classified ads. There are only different online versions, which are a lot cheaper and globally accessible, and thus have a lot more fraud.


You get it. A couple of phrases I live by (taught to me by the haggling parents' generation): "you never know unless you ask" and "the worst they can say is no." These don't just apply to goods and services, either. They have led to very interesting and life-altering experiences that wouldn't have happened if I hadn't asked a one-sentence question.


Everyone I know who uses Claude does not use it through openrouter.


This is definitely not true, and it's easily observed to be false if you live in the area. Then take into account that Waymo is active in far more areas.


Claude CLI has this; it's called learning mode, and you can make custom modes to tweak it more.


Can you describe how to access this feature? I can't find anything about it online, and when I asked Claude on the command line, nothing came up for "learning mode".

Seems very cool.


Update: perhaps it's the verbose flag, e.g. `claude --verbose`, that you were talking about?

I like this a lot...


Ah, cool. Is this meant to be used for learning specifically, or just something that can be toggled whenever you're using Claude to help you with anything?


https://openai.com/index/chatgpt-study-mode/

OpenAI released study mode. I don't think it's anything special beyond a custom prompt telling it to act as a teacher. But it's a good example of what these bots can do if you prompt them right.

The bots as they stand seem to be sycophantic and make a lot of assumptions (hallucinations) rather than asking for clarification or instruction. This isn't really a core truth of bot behaviour; it's more based on adhering to American social norms for corporate communication - deference to authority, etc. You can prompt the bots to behave more usefully for coding - one of my tricks is to tell the bot to ask me clarifying questions before writing any code. This prevents it from making assumptions about functionality that I haven't specified in the brief.
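The clarifying-questions trick is just a system-prompt change. A minimal sketch in the messages shape the OpenAI chat API expects (the prompt wording and the helper name `build_messages` are my own, not from any library):

```python
# A system prompt that makes the model ask before coding (wording is my own).
CLARIFY_FIRST = (
    "You are a coding assistant. Before writing any code, ask me clarifying "
    "questions about any requirements you are unsure of. Only produce code "
    "once I have answered them."
)

def build_messages(user_request: str) -> list[dict]:
    """Build a messages list in the role/content shape chat APIs expect."""
    return [
        {"role": "system", "content": CLARIFY_FIRST},
        {"role": "user", "content": user_request},
    ]

msgs = build_messages("Write a CSV deduplication script.")
print(msgs[0]["role"])  # -> system
```

The same idea works as a one-liner pasted at the top of any chat session; the API form just makes it reusable.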

For non-coding use cases, I like exploring ideas with the bot, and then every now and then prompting it to steel-man the opposing view to make sure I'm not getting dragged down a rabbit hole. Then I can take the best of these ideas and form a conclusion - Hegelian dialectics and all that.


It's more interesting and controversial to hear about.


It has some limitations: https://old.million.dev/docs/manual-mode/block#breaking-rule... and it isn't a silver bullet on its own.

