
Of course, the Pareto principle is at work here. In an adjacent field, self-driving, they have been working on the last "20%" for almost a decade now. It feels kind of odd that almost no one is talking about self-driving anymore, compared to how hot of a topic it used to be, with a lot of deep, moral, almost philosophical discussions.


> The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

— Tom Cargill, Bell Labs

https://en.wikipedia.org/wiki/Ninety%E2%80%93ninety_rule
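Read literally, the quip is schedule arithmetic: the two 90%s sum to 180% of the original estimate. A toy illustration in Python:

```python
# The ninety-ninety rule as arithmetic: the "first 90%" of the code
# and the "remaining 10%" each consume 90% of the planned schedule.
planned_weeks = 10

first_90_percent = 0.9 * planned_weeks  # 9 weeks for most of the code
last_10_percent = 0.9 * planned_weeks   # another 9 weeks for the rest

actual_weeks = first_90_percent + last_10_percent
overrun = actual_weeks / planned_weeks

print(actual_weeks)  # 18.0 -- the project takes 180% of its estimate
print(overrun)       # 1.8
```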


In my experience with enterprise software engineering, at this stage we are able to shrink coding time by ~20%, depending on the kind of code/tests.

However, CI/CD remains tricky. In fact, when AI agents start building autonomously, merge trains become a necessity…
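For what it's worth, the merge-train idea can be sketched in a few lines: each queued change is validated against main plus everything ahead of it in the queue, so concurrent authors (human or agent) can't land mutually breaking changes. This is a toy model only; `passes_ci` and the PR names are hypothetical, not any real CI system's API:

```python
# Toy merge train: each queued change is tested together with main and
# all changes ahead of it, and lands only if that combination passes CI.
# `passes_ci` and the PR names are illustrative stand-ins.

def passes_ci(state: frozenset) -> bool:
    # Stand-in for a real test run; here, two agent PRs conflict.
    return not ({"pr-agent-1", "pr-agent-2"} <= state)

def run_merge_train(main: set, queue: list) -> list:
    merged = []
    for pr in queue:
        candidate = frozenset(main | set(merged) | {pr})
        if passes_ci(candidate):
            merged.append(pr)  # lands on top of everything ahead of it
        # else: pr is kicked off the train and must be rebased/fixed
    return merged

landed = run_merge_train({"main"}, ["pr-agent-1", "pr-human-1", "pr-agent-2"])
print(landed)  # ['pr-agent-1', 'pr-human-1'] -- pr-agent-2 conflicts and is dropped
```

The key property is serialization: no change merges without having been tested against the exact state of main it will land on, which matters once many autonomous agents open PRs concurrently.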


Love this quote. Kinda sad I didn’t see this earlier in my life.


> It feels kind of odd that almost no one is talking about self-driving now, compared to how hot of a topic it used to be

Probably because it's just here now? More people take Waymo than Lyft each day in SF.


It's "here" if you live in a handful of cities around the world, and travel within specific areas in those cities.

Getting this tech deployed globally will take another decade or two, optimistically speaking.


Given how well it seems to be going in those specific areas, it seems like it's more of a regulatory issue than a technological one.


Ah, those pesky regulations that try to prevent road accidents...

If it's not a technological limitation, why aren't we seeing self-driving cars in countries with lax regulations? Mexico, Brazil, India, etc.

Tesla launched FSD in Mexico earlier this year, but you would think companies would be jumping at the opportunity to launch in markets with less regulation.

So this is largely a technological limitation. They have less driving data to train on, and the tech doesn't handle scenarios outside of the training dataset well.


Indian, Mexican and Brazilian consumers have far less money to spend than their American counterparts. I would imagine that the costs of the hardware and data collection don't vary significantly enough to outweigh that annoyance.


Do we even know what % of Waymo rides in SF are completely autonomous? I would not be surprised if more of them are remotely piloted than they've let on...


My understanding is they don't have the capability to have a ride be flat-out remotely piloted in real time. If the car gets stuck and puts its hazards on, a human can intervene, look at the 360 view from the cameras, and then give the car a simple high-level instruction like "turn left here" or "it's safe to proceed straight." But they can't directly drive the car continuously.

And those moments where the car gives up and waits for async assistance are very obvious to the rider. Most rides in Waymos don't contain any moments like that.


That's interesting to hear. It may be completely true, I don't really know. The source of my skepticism, however, is that all of the incentives are there for them to not be transparent about this, and to make the cars appear "smarter" than they really are.

Even if it's just a high level instruction set, it's possible that that occurs often enough to present scaling issues. It's also totally possible that it's not a problem, only time will tell.

What I have in mind is the Amazon stores, which were sold as being powered by AI, but were actually driven by a bunch of low-paid workers overseas watching cameras and manually entering what people were putting in their carts.

https://www.businessinsider.com/amazons-just-walk-out-actual...


Can you name any of the specific regulations that robotaxi companies are lobbying to get rid of? As long as robotaxis abide by the same rules of the road as humans do, what's the problem? Regulations like "you're not allowed to have robotaxis unless you pay me, your local robotaxi commissioner, $3 million/year" aren't going to be popular with the populace, but unfortunately for them, they don't vote, so I'm sure we'll see holdouts. And if multiple companies in multiple markets were all complaining about the local taxi cab regulatory commission, that would be telling; but there's just so much of the world without robotaxis right now (summer 2025) that I doubt it's anything more than the technology being brand spanking new.


Maybe, but it's also going to be a financial issue eventually too

My city had Car2Go for a couple of years, but it's gone now. They had to pull out of the region because it wasn't making them enough money

I expect Waymo and any other sort of vehicle ridesharing thing will have the same problem in many places


But it seems the reason for that is that this is a new, immature technology. Every new technology goes through that cycle until someone figures out how to make it financially profitable.


This is a big moving of the goalposts. The optimists were saying Level 5 would be purchasable everywhere by ~2018. They aren’t purchasable today, just hail-able. And there’s a lot of remote human intervention.

And San Francisco doesn’t get snow.


Hell - SF doesn’t have motorcyclists, or vehicular traffic driving on the wrong side of the road.

Or cows sharing the thoroughfares.

It should be obvious to all HNers who have lived in or travelled to developing / global south regions - driving data is cultural data.

You may as well say that self-driving will only happen in countries where the local norms and driving culture are suitable to the task.

A desperately anemic proposition compared to the science fiction ambition.

I’m quietly hoping I’m going to be proven wrong, but we’re better off building trains than investing in level 5. It’s going to take a coordination architecture owned by a central government to overcome human behavior variance and make full self-driving a reality.


I'm in the Philippines now, and that's how I know this is the correct take. Especially this part:

"Driving data is cultural data."

The optimists underestimate a lot of things about self-driving cars.

The biggest one may be that in developing and global south regions, civil engineering, design, and planning are far, far away from being up to snuff to a level where Level 5 is even a slim possibility. Here on the island I'm on, the roads, storm water drainage (if it exists at all), and the quality of the built environment in general are very poor.

Also, a lot of otherwise smart people think the increment between Level 4 and Level 5 is like the increments between the other levels, when in fact the jump from Level 4 to Level 5 automation is the biggest one and the hardest to successfully accomplish.


Level 5 is a pipe dream. Or if I’m being charitable it’s un-ambitious.

The goal for a working L5 should be “if piloting a rickshaw, will it be able to operate as a human owner in normal traffic.”


Yes, but they are getting good at chasing 9s in the US; those skills will translate directly to chasing 9s outside the US, and frankly the "first drafts" did quite a bit better than I'd have expected even six months ago.

Guangzhou: https://www.youtube.com/watch?v=3DWz1TD-VZg

Paris: https://www.youtube.com/watch?v=iN9nu-IkS1w

Rome: https://www.youtube.com/watch?v=4Zg3jc90JTI


I’m rejecting the assertion that the data covers a physics model - which would be invariant across nations.

I’m positing that the models encode cultural decision-making norms - and using global south regions to highlight examples of cases that are commonplace but challenge the feasibility of fully autonomous driving.

Imagine an auto rickshaw with full self driving.

If in your imagination, you can see a level 5 auto, jousting for position in Mumbai traffic - then you have an image which works.

It’s also well beyond what people expect fully autonomous driving entails.

At that point you are encoding cultural norms and expectations around rule/law enforcement.


You're not wrong on the "physics easy culture hard" call, just late. That was Andrej Karpathy's stated reason for betting on the Tesla approach over the Waymo approach back in 2017, because he identified that the limiting factor would be the collection of data on real-world driving interactions in diverse environments to allow learning theories-of-mind for all actors across all settings and cultures. Putting cameras on millions of cars in every corner of the world was the way to win that game -- simulations wouldn't cut it, "NPC behavior" would be their downfall.

This bet aged well: videos of FSD performing very well in wildly different settings -- crowded Guangzhou markets to French traffic circles to left-hand-drive countries -- seem to indicate that this approach is working. It's nailing interactions that it didn't learn from suburban America and that require inferring intent using complex contextual clues. It's not done until it's done, but the god of the gaps retreats ever further into the march of nines and you don't get credit for predicting something once it has already happened.
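The "march of nines" framing is just multiplicative arithmetic: each added nine of per-mile reliability means 10x more miles between interventions, so equal-looking steps cost roughly equal multiplicative effort. A back-of-envelope sketch (the rates are made up for illustration, not real Waymo or FSD data):

```python
# Illustrative only: how miles-per-intervention scales with each added
# nine of per-mile reliability. The rates are invented, not vendor data.
for nines in range(3, 8):
    failure_rate = 10 ** (-nines)              # per-mile chance of needing help
    miles_per_intervention = 1 / failure_rate  # 10x more miles for each nine
    print(f"{nines} nines -> one intervention per "
          f"{miles_per_intervention:,.0f} miles")
```

This is why progress can look stalled from the outside: going from 4 nines to 5 nines is invisible to most riders, yet it takes roughly as much work as every step before it.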


Thanks, I appreciate your take. For what it’s worth, I’ve been making this point since the start of the FSD/L5/Tesla launch.

I liked the use of the God of the gaps - an effective analogy for the counter position.

I’m rejecting the idea of the march of the 9s eventually getting to FSD - the cultural norms issue is about decision making not about physics.

Eg - You have to decide how aggressively to drive, overtake, or jockey for position.

My estimation is that this is not solvable by on board decision making, because that would be accepting unacceptable legal risk.


It snows like 40% of the year where I live. I don’t think the current iteration of self driving cars is even close to handling that.


Most people live within a couple hours of a city, though, and I think we'll see robotaxis on a majority of continents by 2035. The first few cities and continents will take the longest, but after that it's just a money question, and rich people have a lot of money. The question then is whether the taxi cab consortium, which still holds a lot of power despite Uber, is large enough, in each city in the world that Google has offices in, to prevent Waymo from gaining a foothold.


Yeah where they have every inch of SF mapped, and then still have human interventions. We were promised no more human drivers like 5-7 years ago at this point.


Human interventions.

High speed connectivity and off vehicle processing for some tasks.

Density of locations to "idle" at.

There are a lot of things that make all these services work that mean they can NOT scale.

These are all solvable but we have a compute problem that needs to be addressed before we get there, and I haven't seen any clues that there is anything in the pipeline to help out.


The typical Lyft vehicle is a piece of junk worth less than $20k, while the typical Waymo vehicle is a pretend luxury car with $$$ of equipment tacked on.

Waymo needs to be providing 5-10x the number of daily rides as Lyft before we get excited.


I suspect most gig drivers don't fully account for the cost of running their car, so these services are also being effectively subsidized by their workers.

You can provide almost any service at a loss, for a while, with enough money. We shouldn't get excited until Waymo starts turning an actual profit.


Their cars cost Waymo money.


Well, if we say these systems are here, it still took 10+ years between prototype and operational system.

And as I understand it: these are systems, not individual cars that are intelligent and just decide how to drive from immediate input. These systems still require some number of human wranglers and worst-case drivers, there's a lot of special-purpose code rather than nothing-but-neural-network, etc.

Which is to say, "AI"/neural nets are important technology that can achieve things, but while they can give an illusion of doing everything instantly by magic, they generally don't do that.


It’s past the hype curve and into the trough of disillusionment. Over the next 5,10,15 years (who can say?) the tech will mature out of the trough into general adoption.

GenAI is the exciting new tech currently riding the initial hype spike. This will die down into the trough of disillusionment as well, probably sometime next year. Like self-driving, people will continue to innovate in the space and the tech will be developed towards general adoption.

We saw the same during crypto hype, though that could be construed as more of a snake oil type event.


The Gartner hype cycle assumes a single fundamental technical breakthrough, and describes the process of the market figuring out what it is and isn't good for. This isn't straightforwardly applicable to LLMs because the question of what they're good for is a moving target; the foundation models are actually getting more capable every few months, which wasn't true of cryptocurrency or self-driving cars. At least some people who overestimate what current LLMs can do won't have the chance to find out that they're wrong, because by the time they would have reached the trough of disillusionment, LLM capabilities will have caught up to their expectations.

If and when LLM scaling stalls out, then you'd expect a Gartner hype cycle to occur from there (because people won't realize right away that there won't be further capability gains), but that hasn't happened yet (or if it has, it's too recent to be visible yet) and I see no reason to be confident that it will happen at any particular time in the medium term.

If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like. Is there any historical precedent for a technology's scope of potential applications expanding this much this fast?


> If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like. Is there any historical precedent for a technology's scope of potential applications expanding this much this fast?

Lots of pre-internet technologies went through this curve: PCs during the clock speed race, aircraft during the aeronautics surge of the 50s, cars when Detroit was in its heyday. In fact, cloud computing was enabled by breakthroughs in PCs, which allowed commodity computing to be architected in a way that could compete with the mainframes and servers of the era. Even the original Industrial Revolution was actually a 200-year-ish period where mechanization became better and better understood.

Personally I've always been a bit confused about the Gartner Hype Cycle and its usage by pundits in online comments. As you say, it applies to point changes in technology, but many technological revolutions have created academic, social, and economic conditions that lead to a flywheel of innovation, up until some point on an envisioned sigmoid curve where the innovation flattens out. I've never understood how the hype cycle fits into that, or why it's invoked so much in online discussions. I wonder if folks with business school exposure can answer this question better.


> If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like.

We are seeing diminishing returns on scaling already. LLMs released this year have been marginal improvements over their predecessors. Graphs on benchmarks[1] are hitting an asymptote.

The improvements we are seeing are related to engineering and value-added services. This is why "agents" are the latest buzzword most marketing is clinging to. This is expected, and good, in a sense. The tech is starting to deliver actual value as it matures.

I reckon AI companies can still squeeze out a few years of good engineering around the current generation of tools. The question is what happens if there are no ML breakthroughs in that time. The industry desperately needs them for the promise of ASI, AI 2027, and the rest of the hyped predictions to become reality. Otherwise it will be a rough time when the bubble actually bursts.

[1]: https://llm-stats.com/


The problem with LLMs and all other modern statistical, large-data-driven approaches is that they try to collapse the entire problem space of general problem solving into combinatorial search over the permutations of previously solved problems. Yes, this approach works well for many problems, as we can see from the results, given the huge amounts of data and processing utilized.

One implicit assumption is that all problems can be solved with some permutations of existing solutions. The other assumption is the approach can find those permutations and can do so efficiently.

Essentially, the true-believers want you to think that rearranging some bits in their cloud will find all the answers to the universe. I am sure Socrates would not find that a good place to stop the investigation.


Right. I do think that just the capability to find and generate interesting patterns from existing data can be very valuable. It has many applications in many fields, and can genuinely be transformative for society.

But, yeah, the question is whether that approach can be defined as intelligence, and whether it can be applicable to all problems and tasks. I'm highly skeptical of this, but it will be interesting to see how it plays out.

I'm more concerned about the problems and dangers of this tech today, than whatever some entrepreneurs are promising for the future.


> We are seeing diminishing returns on scaling already. LLMs released this year have been marginal improvements over their predecessors. Graphs on benchmarks[1] are hitting an asymptote.

This isn't just a software problem. If you look at the hardware side, you see the same flat line (IPC is flat generation over generation). There are also power and heat problems that are going to require some rather exotic and creative solutions if companies are looking to hardware for gains.


The Gartner hype cycle is complete nonsense, it's just a completely fabricated way to view the world that helps sell Gartner's research products. It may, at times, make "intuitive sense", but so does astrology.

The hype cycle has no mathematical basis whatsoever. It's a marketing gimmick. Its only value in my life has been to quickly identify people who don't really understand models or larger trends in technology.

I continue to be, but on introspection probably shouldn't be, surprised that people on HN treat it as some kind of gospel. The only people who should respect it are other people in the research marketing space, as the perfect example of how to dupe people into paying for your "insights".


Could you please expand on your point about expanding scopes? I am waiting earnestly for all the cheaper services that these expansions promise. You know, cheaper white-collar services like accounting, tax, healthcare, etc. The last reports saw accelerating service inflation. Someone is lying. Please tell me who.


Hence why I said potential applications. Each new generation of models is capable, according to evaluations, of doing things that previous models couldn't that prima facie have potential commercial applications (e.g., because they are similar to things that humans get paid to do today). Not all of them will necessarily work out commercially at that capability level; that's what the Gartner hype cycle is about. But because LLM capabilities are a moving target, it's hard to tell the difference between things that aren't commercialized yet because the foundation models can't handle all the requirements, vs. because commercializing things takes time (and the most knowledgeable AI researchers aren't working on it because they're too busy training the next generation of foundation models).


It sounds like people should just ignore those pesky ROI questions. In the long run, we are all dead so let’s just invest now and worry about the actual low level details of delivering on the economy-wide efficiency later.

As capital allocators, we can just keep threatening the worker class with replacing their jobs with LLMs to keep the wages low and have some fun playing monopoly in the meantime. Also, we get to hire these super smart AI researchers people (aka the smartest and most valuable minds in the world) and hold the greatest trophies. We win. End of story.


It's saving healthcare costs for those who solved their problem and never go in, which would not be reflected in service inflation costs.


Back in my youthful days, educated and informed people chastised using the internet to self-diagnose and self-treat. I completely missed the memo on when it became a good idea to do so with LLMs.

Which model should I ask about this vague pain I have been having in my left hip? Will my insurance cover the model service subscription? Also, my inner thigh skin looks a bit bruised. Not sure what’s going on? Does the chat interface allow me to upload a picture of it? It won’t train on my photos right?


> or if it has, it's too recent to be visible yet

It's very visible.

Silicon Valley and VC money have a proven formula: bet on founders and their ideas, deliver them, and get rich. Everyone knows the game; we all get it.

That's how things were going until recently. Then FB came in and threw money at people, and they all jumped ship. Google did the same. These are two companies famous for throwing money at things (Oculus, the metaverse, G+, quantum computing) and face-planting right and proper with them.

Do you really think that any of these people believe deep down that they are going to have some big breakthrough? Or do you think they all see the writing on the wall and are taking the payday where they can get it?


It doesn't have to be "or". It's entirely possible that AI researchers both believe AI breakthroughs are coming and also act in their own financial self interest by taking a lucrative job offer.


Liquidity in search of the biggest holes in the ground. Whoever can dig the biggest holes wins. Why or what you get out of digging the holes? Who cares.


The critics of the current AI buzz certainly have been drawing comparisons to self driving cars as LLMs inch along with their logarithmic curve of improvement that's been clear since the GPT-2 days.

Whenever someone tells me how these models are going to make white collar professions obsolete in five years, I remind them that the people making these predictions 1) said we'd have self-driving cars "in a few years" back in 2015, and 2) started making the white collar predictions in 2022, so five years from when?


> said we'd have self driving cars "in a few years" back in 2015

And they wouldn't have been too far off! Waymo became L4 self-driving in 2021, and has been transporting people in the SF Bay Area without human supervision ever since. There are still barriers — cost, policies, trust — but the technology certainly is here.


People were saying we would all be getting in our cars and taking a nap on our morning commute. We are clearly still a pretty long ways off from self-driving being as ubiquitous as it was claimed it would be.


There are always extremists with absurd timelines on any topic! (Didn't people think we'd be on Mars in 2020?) But this one? In the right cities, plenty of people take a Waymo morning commute every day. I'd say self-driving cars have been pretty successful at meeting people's expectations — or maybe you and I are thinking of different people.


The expectation of a "self-driving car" is that you can get in it and take any trip that a human driver could take. The "in certain cities" is a huge caveat. If we accept that sort of geographical limitation, why not say that self-driving "cars" have been a thing since driverless metro systems started showing up in the 1980s?


And other people were a lot more moderate but still assumed we'd get self-driving soon, with caveats, and were bang on the money.

So it's not as ubiquitous as the most optimistic estimates suggested. We're still at a stage where the tech is sufficiently advanced that seeing them replace a large proportion of human taxi services now seems likely to have been reduced to a scaling / rollout problem rather than primarily a technology problem, and that's a gigantic leap.


People were doing that in their Teslas years ago and making the news for sleeping on the 5.


Reminds me of electricity entering the market and the first DC power stations setup in New York to power a few buildings. It would have been impossible to replicate that model for everyone. AC solved the distance issue.

That's where we are at with self-driving: it can only operate in one small area, and you can't own one.

Self-driving isn't even close to where 3D printers are today, or where the microwave was in the 50s.


No, it can operate in several small areas, and the number of small areas it can operate in is a deployment issue. It certainly doesn't mean it is solved, but it is largely solved for a large proportion of rides, in as much as they can keep adding new small areas for a very long time without running out of growth-room even if the technology doesn't improve at all.


Okay, but the experts saying self driving cars were 50 years out in 2015 were wrong too. Lots of people were there for those speeches, and yet, even the most cynical take on Waymo, Cruise and Zoox’s limitations would concede that the vehicles are autonomous most of the time in a technologically important way.

There’s more to this than “predictions are hard.” There are very powerful incentives to eliminate driving and bloated administrative workforces. This is why we don’t have flying cars: lack of demand. But for “not driving?” Nobody wants to drive!


I think people don't realize how much models have to extrapolate still, which causes hallucinations. We are still not great at giving all the context in our brain to LLMs.

There's still a lot of tooling to be built before it can start completely replacing anyone.


It doesn't have to "completely" replace any individual employee to be impactful. If you have 50 coders that each use AI to boost their productivity by 10%, you need 5 fewer coders. It doesn't require that AI is able to handle 100% of any individual person's job.
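That claim is simple throughput arithmetic, though the exact figure is 50/1.1 ≈ 45.5 coders, i.e. about 4.5 fewer rather than exactly 5. A quick sketch:

```python
# Back-of-envelope: if each of `team` coders gets a `boost` fractional
# productivity gain, how many coders produce the same total output?
team = 50
boost = 0.10

needed = team / (1 + boost)  # same output with faster coders
fewer = team - needed

print(f"{needed:.1f} coders needed -> about {fewer:.1f} fewer")
# 45.5 coders needed -> about 4.5 fewer (the parent's "5 fewer" rounds up)
```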


How profound. No one has ever posted that exact same thought before on here. Thank you.


"I don't get all the interest about self-driving. That tech has been dead for years, and everyone is talking about that tech. That tech was never that big in therms of life... Thank you for your attention to this matter"



