Hacker News | beering's comments

Or rather, it’s hard to ask everyone to compare both products side by side on their own use cases. So the choice really comes down to word of mouth, even when those use cases might be better served by Codex.

Codex also gives you a lot more usage for $20/month than Claude, so there isn’t that fear that high or xhigh reasoning will eat up all your quota. It really comes down to whether you want to try to save some time or not. (I default to xhigh because it’s still fast enough for me.)

People have been complaining about this since GPT-4 and have never been able to provide any evidence (even though they have all their old conversations in their chat history). I think it’s simply new-model shininess turning into raised expectations after some amount of time.

I would have thought so too. But my n=1 has CC solving pretty much the same task today as it did about two weeks ago, with drastically degraded results this time.

The background being that we scrapped working on a feature and then started again a sprint later.

In my cynicism I find it more likely that a massively unprofitable LLM company is trying to cut costs at any price than that everyone else is suffering from a collective delusion.


I agree with you. I too complain about this same phenomenon with my colleagues, and we always arrive at the same conclusion: it’s probably us just expecting more and more over time.

Word on the street is that Opus is a much, much larger model than GPT-5.4, and that’s why the rate limits on Codex are so much more generous. But I guess you could also just switch to Sonnet or Haiku in Claude Code?

Claude Code is not a platform and you’re not meant to be building on it. Netflix is also not a platform and you shouldn’t be running code (open source or not) to mass download Netflix movies either.

It's a reasonable comment, and I should be clear, I don't expect it to be a platform. But I do expect to be able to use its advertised features for any reasonable purpose they can support.

Where it leaves me is sort of like the DoD: nobody should use Claude for anything. Because Anthropic has set as a principle here that if they don't like what you do, they will interfere with your usage. There is no principle to guide you on what they might dislike and ban next. So you can't build anything you need to be able to rely on. If you need to rely on it, don't use Claude Code.

And to be clear, I'm not arguing at all against using their API per-token billed services.


That rather hinges on exactly what "can support" means. Can they support use cases which work individually but would exceed their capacity to provide if done by a large number of their subscribers?


Is using claude -p supposed to be dangerous? Could someone using it be mistaken for openclaw or other things?

If yes, why does Anthropic provide this CLI flag?
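
(For anyone unfamiliar: -p is Claude Code's non-interactive "print" mode, which runs one prompt and writes the result to stdout, which is what makes it scriptable. A minimal sketch of driving it from Python; the prompt and timeout are illustrative, and it assumes the claude CLI is installed and authenticated:)

    import subprocess

    # Run Claude Code headlessly: -p prints the response and exits
    # instead of opening the interactive session.
    result = subprocess.run(
        ["claude", "-p", "Summarize the README in this directory"],
        capture_output=True,
        text=True,
        timeout=300,  # illustrative guard against a hung run
    )
    print(result.stdout)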


They have so much mindshare right now that they can’t lose, and the number of users who use opencode and would be affected is minuscule, on the level of complaining about your online bank not supporting Konqueror.

Mainly, OpenSCAD is not a BRep modeling tool! It is not on the same level of power as CAD tools with a BRep kernel, and this especially shows when you want to do a fillet over an arbitrary edge. Unfortunately these kernels are hard to build and integrate, and I only know of two open-source BRep kernels out there: OpenCASCADE (used by FreeCAD and build123d) and truck (not sure what its status is).
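
To make the fillet point concrete: on a BRep kernel, "fillet these edges" is a one-liner because the kernel knows the exact edges and faces. A minimal sketch using CadQuery (another OpenCASCADE-based tool; the dimensions and edge selector are made up for illustration):

    import cadquery as cq

    # Build a box on the OpenCASCADE BRep kernel, then fillet every
    # edge parallel to the Z axis. Selecting arbitrary edges like this
    # is exactly what OpenSCAD's mesh-based CSG approach can't express.
    part = (
        cq.Workplane("XY")
        .box(30, 20, 10)
        .edges("|Z")
        .fillet(2.0)
    )
    cq.exporters.export(part, "filleted_box.step")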

So are you able to get free inference now that you decrypted this?

It doesn't look like it in the full sense of "free". But part of how one pays for these services is by running a permissive modern browser that allows the corporation to spy on you even when you have already paid in currency. In a sense, by depriving them of the ability to easily spy on you, this workaround is closer to "free".

>My best guess is -- ChatGPT is running something in your browser to try to determine the best things to send down to the model API

There's no way this is worth it unless the models are absolutely tiny, in which case any benefit from offloading to the client is marginal and probably isn't worth the engineering effort.


It’s free as a loss leader. The trick is to upsell later. Unfortunately for OpenAI there are plenty of competitors with fungible products, so it might be hard to pull a classic monopoly rug-pull.

They already see everything I’m doing because I send my prompts to them. What “workaround” are you referring to?

They see everything you're doing because you send the text. But this is talking about everything about your computer system, which you would not normally be sending to them or having involved at all. This workaround lets you avoid sending unneeded information about your computer setup. It is not about avoiding sending prompt text.

And as for "but ChatGPT isn't paid" (another commenter): well, then yes, that's even closer to free by removing this spying on your computer setup. But they spy on the paid users too.


But isn't ChatGPT access free through the browser? What do you mean already paid in currency?

If you want to send more than a few prompts each day, you have to pay. With currency.

This is 100% crank “science” that has wrapped up a banal finding in big words and LaTeX. The claim is roughly as exciting as, “some computer programs print nothing to stdout.”

The output shown is not “null” or “void”. It is the empty string, which these LLMs are perfectly capable of outputting. Technically, it outputs the stop token, analogous to \0 at the end of a C string.
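
Put differently, "no output" just means the stop token was sampled first. A toy decoding loop (the token names and helper are made up) shows why the result is an ordinary empty string:

    # Toy decoder: generation stops when the stop token appears.
    # If it appears first, the output is "" (a zero-length string),
    # not some mystical "null" or "void".
    EOS = "<eos>"

    def decode(sampled_tokens):
        """sampled_tokens: the tokens a model 'chose', in order."""
        out = []
        for tok in sampled_tokens:
            if tok == EOS:
                break  # analogous to hitting \0 in a C string
            out.append(tok)
        return "".join(out)

    print(repr(decode([EOS])))             # '' (the empty string)
    print(repr(decode(["Hi", "!", EOS])))  # 'Hi!'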


It looks like jargon and LaTeX are the future of spam, given how readily people treat anything framed in them as legitimate.

Can’t wait to start getting spam emails about exogenous upregulation of androgen receptor density within the penile vasculature to potentiate trophic tissue remodeling from penus-enlargement.biz


Finally someone called it out, it's crazy that such posts get to the HN front page. Zenodo gets so much AI slop spam nowadays.


Doesn’t a basic airplane autopilot just maintain flight level, speed, and heading? What are some other things it can do?


Recovering from upsets is the big thing. Maintaining flight level, speed, and heading while upside down isn’t acceptable.

Levels of safety are another consideration: car autopilots don’t use multiple levels of redundancy on everything, because cars can stop without falling out of the sky.


That's still massively simpler than making a self-driving car.

It's trivially easy to fly a plane in straight and level flight, to the extent that you don't actually need any automation at all to do it. You simply trim the aircraft to fly in the attitude you want, and over a reasonable timescale it will do just that.


> It's trivially easy to fly a plane in straight and level flight, to the extent that you don't actually need any automation at all to do it. You simply trim the aircraft to fly in the attitude

That seemingly shifts the difficulty from the autopilot to the airframe. But that’s not actually good enough; it doesn’t keep an aircraft flying when it’s missing a large chunk of wing, for example. https://taskandpurpose.com/tech-tactics/1983-negev-mid-air-c...

Instead, you’re talking about the happy path, and if we accept the happy path as enough, there are weekend equivalents of self-driving cars built with minimal effort. Being production-worthy, however, is about more than being occasionally useful.

Autopilot is difficult because you need to do several things well or people will definitely die. Self-driving cars are far more forgiving of occasional mistakes, but again it’s the “or people die” part that makes them difficult. Tesla isn’t actually ahead of the game; they are just willing to take more risks with their customers’ and the general public’s lives.


> Self-driving cars are far more forgiving of occasional mistakes

I would say not, no.

It's almost impossible to crash a plane. There's nothing to hit except the ground, and you stay away from that unless you really really mean to get close.

It's very easy to crash a car, and if you do that most of the time you'll kill people outside the car, often quite a lot of them.

There are no production aircraft fitted with autopilots that can correct for breaking a wing off.


Autopilots have contributed to a significant number of crashes and that’s with a very safety conscious industry.

In a hypothetical Tesla-style “let’s take more risk” approach, buggy autopilots can surprisingly quickly get into a situation at cruising altitude that isn’t recoverable before hitting the ground. Asking “what is the worst possible thing an autopilot could do?” is eye-opening here.

> There are no production aircraft fitted with autopilots that can correct for breaking a wing off.

That was a production aircraft still in service. https://simpleflying.com/how-many-f-15-eagles-are-still-in-s...

Granted, that specific case depends on the aircraft being a lifting body, etc., so it obviously doesn’t extend to commercial aviation. But my point was that lack of aerodynamic stability on its own isn’t enough to make giving up OK.


> Autopilots have contributed to a significant number of crashes and that’s with a very safety conscious industry.

"Contributed to", in the sense that the pilots decided to just blindly trust the autopilot and let it make a developing situation worse rather than, oh I don't know, maybe FLYING THE DAMN PLANE.

> buggy autopilots can surprisingly quickly get into a situation at cruising altitude which isn’t recoverable before hitting the ground

If you allow the autopilot to fly the plane into the ground, yes. If you're paying attention you ought to be able to recover just about anything, if most of the plane is still working. The vast majority of incidents where aircraft have departed controlled flight and crashed are because the pilots lost sight of the important thing - FLYING THE DAMN PLANE.

> But my point was that lack of aerodynamic stability on its own isn't enough to make giving up OK.

It's got nothing to do with aerodynamic stability. If you adjust the steering and suspension in a car correctly, it'll drive in a perfectly straight line with no user input for a surprisingly long way. With modern electronic power steering and throttle-by-wire systems it's actually surprisingly easy to turn an off-the-shelf car (even something cheap, secondhand, and quite old like a 2010s Vauxhall Corsa) into a simple line-following robot like we used to build at uni in the 80s and 90s in robotics class. Sure, you need a disused aerodrome to play with it, but it'll work.

There is the far greater problem that self-driving cars have to cope with a far more rapidly changing environment than an aircraft. A self-flying plane would be far easier to get right than a self-driving car.

A human driver can't just react, painfully slowly, in the way that current "self-driving" cars do; they have to anticipate and be "reacting" before the problem even begins to start. You do it yourself, even if you don't realise it. You hang back from that car because you know they're going to - there, right across two lanes, not so much as a glance in their mirror, what did I tell you? - they're going to do something boneheaded. That car's just pulled in, the passenger in the back is about to open their door right into your - nicely done, you moved out to the line and missed them by 50cm at least.

Self-driving cars can't do that, and probably never will. Self-flying aircraft won't need to do that.

And an autopilot is a surprisingly simple device that responds in simple and predictable ways to sensor inputs.
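
To illustrate that simplicity: the core of a basic hold mode is just a feedback loop from sensor error to a control surface. A toy altitude-hold PID sketch (the gains, units, and clamping are invented for illustration; real autopilots add redundancy, mode logic, and envelope protection on top):

    # Toy altitude-hold: a plain PID loop from altitude error to an
    # elevator command. Predictable output for a given sensor input.
    class AltitudeHold:
        def __init__(self, kp=0.004, ki=0.0002, kd=0.01):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, target_ft, measured_ft, dt):
            error = target_ft - measured_ft
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            cmd = (self.kp * error
                   + self.ki * self.integral
                   + self.kd * derivative)
            # Clamp to a sane elevator deflection range.
            return max(-1.0, min(1.0, cmd))

    ap = AltitudeHold()
    print(ap.step(target_ft=10_000, measured_ft=9_800, dt=0.1))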


> "Contributed to", in the sense that the pilots decided to just blindly trust the autopilot and let it make a developing situation worse rather than, oh I don't know, maybe FLYING THE DAMN PLANE.

Excuses don’t save lives. You can’t trust pilots or drivers to always make the correct decision instantly. Any system designed in such a manner will get people killed.

> If you allow the autopilot to fly the plane into the ground, yes.

Things can be unrecoverable a full minute before impact. There are some seriously harrowing NTSB reports, and that’s just what has already happened; the possible failure modes are practically endless.


> Excuses don’t save lives. You can’t trust pilots or drivers to always make the correct decision instantly. Any system designed in such a manner will get people killed.

Okay, so what's your answer? Stick yet another computer in to go wrong and fly the plane into the ground when it gets the wrong idea about a situation? Add yet more sensors to the car to prevent the driver steering away from an obstacle because it thinks they're not using their indicators yet?

> Things can be unrecoverable a full minute before impact.

Can you find an example of one that isn't down to gross mechanical failure, or just plain Operator Idiocy?


> Okay, so what's your answer? Stick yet another computer in to go wrong and fly the plane into the ground when it gets the wrong idea about a situation? Add yet more sensors to the car to prevent the driver steering away from an obstacle because it thinks they're not using their indicators yet?

I’m not condemning the airline industry here; the safety-conscious approach has done a good job over time, especially in terms of redundancy. A major area of improvement is the way autopilots communicate with pilots, but that’s a hard process.

The car industry isn’t doing nearly as well in terms of redundancy and the like, so there are many obvious areas for improvement through solid engineering, without changing anything fundamental. That said, communication is again lacking.

> Can you find an example of one that isn't down to gross mechanical failure, or just plain Operator Idiocy?

Operator Idiocy isn’t some clearly defined line. An aircraft with a moderate fuel leak can look like idiocy after the fact, but it’s an easy mistake to make. That’s exactly the kind of thing autopilots could catch, not just from fuel sensors but from how the flight characteristics change as the aircraft gets lighter, yet aircraft have happily flown into trouble over the ocean.

