esics6A's comments | Hacker News

“Profit is up 1000% and they’re creating some plants last month and hope to use advanced packaging in 2 days…” Wishful thinking and hyperbole yet again! Social media runs with these unsubstantiated claims with no verification. It’s boring and tiresome to see this versus the reality. They’re decades from being fully independent. It was only recently that Washington revoked Intel’s, AMD’s and NVIDIA’s licenses to export chips to Huawei.

Stop believing everything you see on social media. Think critically. Do the claims sound too good to be true? They probably are; we exist in a misinformation media environment.


Google is the new Yahoo and sliding into irrelevance. Layoffs like this cut the core engineering functions of a company. After a while it becomes harder and harder to innovate and create meaningful products. Offshore teams can never replicate or replace these functions. This is crazy and the executives are clueless. Bring back Larry and Sergey now before it’s too late!


What can be described as “Western Civilization” is a much bigger cultural zone than we usually define it to be. Historically, people never really divided the world this way either. It’s just modern academics who screw these concepts up completely, have obvious agendas and lack context. Europe, North Africa and the Middle East are all part of “The West”. The regions were only ever divided at various times by empires, language groups and religions. As a result, the author of this article’s argument is disingenuous at its core. Musical traditions from Baghdad are still Western. This is why these musical traditions could influence one another: they were all Western musical styles.

Edit: Wanted to add that the alphabet is a Western linguistic concept that’s common to all Western languages.


Musk wants to make sure that BYD has little or no competition. Kamikaze management yet again from another American company. Musk is such a genius!


I criticize Elon often and harshly, but putting off a low-cost car might be the rational and realistic part of this set of decisions: given the opportunity cost of the Cybertruck and the other side quests Tesla has been on, a low-cost car should have been started 4 years ago, using the money that's been spent on a second-rate robot and a semi truck that's good for hauling potato chips.


I criticise Apple often and harshly, but putting off a premium car might be the rational and realistic part of this set of decisions.

Personally, I think Tesla are left holding the can at this juncture ... and the next 5 years are going to be extremely challenging for them.

Margins eaten away at the premium level - with no hope of clawing them back at the lower cost mass level.


Agreed, and it shows that planning that takes into account project risk is very valuable. While Apple was spending what probably amounts to tens of billions on a car project, they never let that be a threat to their business or even to their valuation. So burning all that money was nearly a non-event, and didn't ding Tim Cook's reputation at all.

Tesla meanwhile is left short of the resources it will take to retain a leading position in EVs. The robotaxi announcement just adds risk.


> Margins eaten away at the premium level - with no hope of clawing them back at the lower cost mass level.

I'm somewhat surprised they don't try to make a car in the luxury segment. There'd be a learning curve -- expectations for comfort, build quality, and functionality are definitely several steps beyond what they've tried before -- but that's where the margins are, and some low-volume high-margin sales can pay for a lot of see-what-sticks-to-the-wall in the econobox market.


Market segmentation is a thing. Chasing price sensitive consumers doesn’t necessarily make you very profitable.


No, but at the same time, Tesla isn't renowned for its build quality the way VW is.


Because you're looking for a very specific language or tool experience and not capabilities. This is a common mistake. Many developers might have loads of experience in the problem domain your company needs, but not in a specific language or framework. Say you do integrations for banking clients: you look for a Java developer and find someone who knows the framework you're using, but they've only ever developed APIs for mobile apps. Instead, look for the person who might have done some Java, a lot of COBOL, and has worked with mainframes and banking systems. That person might even have been a sysadmin who learned coding, but they'll hit the ground running.


This is a bubble, and one that will burst very hard. AI is the perfect technology for this. It's opaque, and most investors (who barely understand tech in general) have no clue what it really does or how it works. This is the closest we've ever come to multiple large, respected tech companies selling "snake oil", a cure-all. The capabilities of AI they mention as if they're available today are literally many decades and generations away. Automating information workers, creatives and engineers would take AGI, which is simply not possible with our technology.

When the AI bubble bursts, I wouldn't be surprised if it takes down major tech companies with it.


Can you be specific about what the bubble is?

I am getting 100x value out of my 30 chatgpt bucks. I am doing things that I could not have done pre-gpt4, being more productive by a factor of, idk, 1.25 maybe.

It's quite simply the largest/simplest productivity improvement in my life, so far. Given it's only going to get better, unless they are underpricing the service by an enormous margin (as in: defrauding-shareholders margin), I have a hard time understanding what shape the bubble could possibly have.


I get that there are limitations with LLMs, but I don't understand people saying they have no value just because they occasionally hallucinate. Over the past week I've used ChatGPT to code not one but two things that were completely beyond my knowledge (an auto-delete JS snippet, and a GNOME extension that turns my dock red if my VPN turns off). These are just two examples. I've also used it to write a handy regex and a better bash script.

LLMs are insanely helpful if you use them with their limitations in mind.


> LLMs are insanely helpful if you use them with their limitations in mind.

This depends on your use case. I can honestly say that all the chatbot AIs don't "get" my kind of thinking about mathematics and programming.

Since a friend who is a graduate student in computer science did not believe my judgement, I verbally presented him with some test prompts for programming tasks where I wanted the AI to help me (these are not the most representative ones for my kind of thinking, but they are prompts for which it is rather easy to decide whether the AI is helpful or not).

He had to agree from the description alone that the AIs would have difficulties with these tasks, despite the fact that they are common, very well-defined programming problems. He opined that these tasks are simply too complex for the existing AIs, and suggested that if I split them into much smaller subtasks, the AI might be helpful. Let me put it this way: I personally doubt that the AI would be of help even if I stated the subtasks the way I would organize the respective programs. :-)

What was important for me was to be able to convince my counterpart that whether AIs are helpful for programming depends a lot on your kind of thinking about programming and your programming style. :-)


I would say that the ability to break a problem down into manageable chunks is the mark of a sr dev. I think of ChatGPT as a jr that's read a lot but understands only a little. To crib Kurzweil, you gotta 'run with the machine'.


> I would say that the ability to break a problem down into manageable chunks is the mark of a sr dev.

I believe that I am perfectly capable of doing this. But if I have to "babysit" the LLM, its helpfulness decreases.


This is a rather long post; I'm genuinely curious why you did not describe the problem that you want to solve. Is it too complex for even humans to understand?


Any chance you could share the prompt?

I, on the other hand, feel like I am completely in sync with Copilot and ChatGPT. It is as if it always knows what I am thinking.


Don't take the following prompts literally, but think in the direction of:

"Create a simple DNS client using C++ running on Windows using IO Completion ports."

"Create a simple DNS client using C++ running on GNU/Linux using epoll."

"Write assembler code running on x86-64 running in ring 0 that sets up a minimal working page table in long mode."

"Write a simple implementation of the PS/2 protocol in C running on the Arduino Uno to handle a mouse|keyboard connected to it."

"Write Python code that solves the equivalence problem of word equivalence in the braid group B_n"

"Write C++|Java|C# code that solves the weighted maximunm matching matching problem in the case of a non-bipartite graph"

...

I experimented with such types of prompts in the past and the results were very disappointing.

All of these are tasks that I am interested in (in my free time), but would take some literature research to get a correct implementation, so some AI could theoretically be of help if it was capable of doing these tasks. But since for each of these tasks, I don't know all the required details from memory, the code that the AI generates has to be "quite correct", otherwise I have to investigate the literature; if I have to do that anyway, the benefit that the AI brings strongly decreases.


I tried the first one, but currently I don't have time to verify what it generated.

But I have done many Arduino/Raspberry Pi things lately for the first time in my life, and I feel like ChatGPT/Copilot has given me a huge boost. Even if it doesn't always give 100 percent working code out of the box, it gives me a strong starting point that I can keep tweaking myself.


What did happen when you split the question into subtasks? What were those questions?


> LLMs are insanely helpful if you use them with their limitations in mind.

The fact that LLM responses can't be ad-supported (yet) makes them much more valuable than internet search, IMO. You have to pay for ChatGPT because there are no ads. No ads means no constant manipulation of content and of your search to get more ads in front of you.

Ironically, having to pay to use genAI is its best selling point.


This is only temporary. Ads will be put back in as soon as possible.


You're really generating $3000 per month from ChatGPT? Can you give a hint about what you've built that generates this kind of ROI?

I have only seen people making money in AI by selling AI products/promises to other people who are losing money. The practical uses of these tools still seem to be largely untapped outside of their use as enhanced search engines. They're great at that, but that does not have a return on value in proportion to the current investment in this space.


> Can you give a hint about what you've built that generates this kind of ROI?

Sure. Absolutely nothing amazing: (Mostly) internal software for a medical business I am currently building.

It's just that the actual cost of hiring someone is quite a bit higher than what is printed on the paycheck, and the risk attached to anyone leaving a small team is huge (n=0 and n=1 is an insane difference). GPT4 has bridged the gap between being able to do something and not being able to do something at various points over the past year.

EDIT: And to be clear, while I won't claim "rockstar programmer", I have coded for roughly 20 years, which is the larger part of my life.


Just spoke to a restaurant group owner in Mexico who was able to eliminate their web developer because he can now ask ChatGPT to draft up a basic website.

The kicker? It couldn't do the interactive menu their old website did, so now clicking menu links to a PDF. Which is always, ALWAYS, better.


> he can now ask ChatGPT to draft up a basic website.

I'm pretty sure he could have done that with one of the thousands of tools like Wix, many years before ChatGPT.


Even just looked at as a better frontend to the Wix help docs, ChatGPT empowered this restaurant owner to do the job themselves rather than having to have a person do it. Which means that person is out of a job. Good for the restaurant owner, but bad for that person. Which means it comes down to personal relationships and how you treat people and all those soft skills that aren't programming.


Pretty sure he still does that. Unless ChatGPT can now test and deploy a website as well as generate text.


Yes, but which one of those thousands? How long would it take to learn how to use it? Etc. There's still less friction in just asking ChatGPT to do this via the same interface where you ask it to do a bunch of other stuff.


It's better with accessibility?


Sorry, but why is PDF better than HTML? If PDF is better, would you prefer every website just downloaded a PDF to your phone when you visit their URL, instead of serving you HTML? If not, why is it different for a restaurant menu?


It's better in the pragmatic sense of like, it's more likely to be updated. They already have a PDF or docx laying around because they had to design their print menu, so now they can just upload it. But yes, ideally the menu would be html and would be accurate and up to date and responsive on mobile.


This is just a +1 to the ROI discussion, but I'd say that AI tooling roughly doubles my development productivity.

Some of it's in asking ChatGPT "Give me 3 possible ways to implement X" and getting something back I hadn't considered. A lot of it is in a sort of "super code completion".

I use Cursor and the UI is very slick. If I'm stuck on something (like a method that's not working) I can highlight it and hit Cmd+L and it will explain the code and then suggest how to fix it.

Hit Cmd+K and it will write out the code for you. Also, gotten a lot of mileage out of writing out a rough version of something in a language I know and then getting the AI to turn that into something else (ex: Ruby to Lua).


You are only looking at one dimension. What is your hourly rate based on your salary? If ChatGPT saves you 10 hours a month, that could easily be over $2,000.


But that’s only true if it eventually puts an extra $2,000 in your pocket or an extra 10 hours in your life.

If you estimate that it saves you 10 hours per month, but your salary stays the same and you don’t work fewer hours, did it really give you $2,000 in value?

Obviously I don’t know the details of OP’s situation. Maybe they aren’t salaried. Maybe they work for themselves. Etc. I just think people tend to overestimate the value of GPTs unless it is actually leaving them with more money in their pocket.


It's that "it's only going to get better" part that is driving the bubble, I think.

The market has this idea that over the next year we're somehow going to have AI that's literally perfect. Yet that's not how technology works; it takes decades to get there.

It'd be like if the first LCD TV was invented, and all of a sudden everyone is expecting 8k OLED by the next year. It just doesn't work like that.


Fair – but again: if it just stayed frozen in the state it is now (and that assumption is about as unreasonable as it being "perfect" in a year), it's already going to be tremendously useful for increasingly many people (when costs go down, and they will, accessibility will go up) — at least until something better comes around.

For those who extract value right now, the simple alternative (just not using it) is never going to be the better choice again. It's transformative.


Yeah, but this is different because it's largely just money -> more GPUs -> more people looking at the problem -> better results. You can't stumble upon an 8K TV overnight but you can luck upon the right gold mining claim and you can luck upon some new training algorithm that changes the game immediately.


This assumes answers which go from 8/10 to 9/10 increase the cost linearly. The scale could be exponential or worse.


Do any AI companies actually turn a profit? I feel like the only real winner is Nvidia because they are selling shovels to the gold diggers, while all the gold diggers are trying to outspend each other without a business model that has any consideration for unit economics.


I love a prudent take on company money – but given how investing works and how young this entire thing is and the (to me) absolutely real value, I find it hard to be very worried about that part right now.

I can literally run a ballpark model on my MB Pro, right now, at marginal additional electrical cost. I will be the first to say that all of this (including GPT4) is still fairly garbage, but I don't know when in the history of tech less fantasy was required to get from here to something good.


The thing is that the bigger giants like MSFT or Amazon probably profit quite nicely from AI. Smaller companies not aligned with any big giant - probably not.


I’m really curious what is making you so much more productive. My experience with AI has largely been the opposite. Also curious how you’re using AI to make $3,000 per month more than you would without it.


I feel the same way. I think LLMs are neat, and I find them interesting from a technical standpoint, but I have yet to have them do anything for me that's more than just a novelty. Even things like Copilot, which I'll admit has impressed me a bit, doesn't feel like it would radically change my life, even if it was completely foolproof.


It's not that he is making $3k a month; he's gaining the equivalent productive value of $3k.


How is he measuring that, though?


Average time saved per task * Number of tasks * Cost of their time per hour.

At $100/hr (not unreasonable for Sr. SWE) - he just needs to save 30 hrs/mo.

Not to mention the opportunity cost of using that time on more impactful activities on their startup. AI can be a force multiplier for sure.
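
Back-of-the-envelope, the same numbers in code (a sketch; the $100/hr rate and 30 hrs/mo are the assumptions above, not measured figures):

    # Rough value-of-time estimate using the figures discussed above.
    hours_saved_per_month = 30
    hourly_rate = 100            # USD, the assumed Sr. SWE rate
    subscription = 30            # the "30 bucks" a month from upthread
    net_value = hours_saved_per_month * hourly_rate - subscription
    print(net_value)             # 2970 -> roughly the $3k being debated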


Then he is either making an extra $3,000/month or working 30 hours less per month. If the former isn’t true, I am extremely doubtful that the latter is.

Seems more likely that he is overestimating the value that LLMs are bringing him. Or he is an extreme outlier, which is why I was asking for further details.


> largest/simplest productivity improvement in my life, so far

There have been many productivity improvements in the last years: internet search, internet forums, Wikipedia, etc. LLMs and other AI models are a continuation of the improvement of information processing.


There is very clear value in LLMs.

The bubble is that it is not clear there is a $XXXB business in building or hosting them.

OpenAI is losing money hand over fist, open source models are becoming available that are on par and so commoditize the market, etc.


The bubble is that every $1 in capital going to OpenAI/Nvidia is a $1 that cannot be invested anywhere else: Healthcare, Automotive, Education, etc. Of course OAI and Nvidia will invest those funds, but in areas beneficial purely to them. Meta has lost $20bn trying to make Horizon Worlds a success, and appears to have abandoned it.

Even government-led industrialization efforts in socialist economies led to actual products, like the production of the Yagan automobile in Chile in the 1970s[0].

We've already had a decade plus of sovereign wealth funds sinking tens of billions into Uber and autonomous driving. We still don't have those types of cars on the road and it's questionable whether self driving will even generate the economic growth multiplier that its investment levels should merit.

[0] https://journal.hkw.de/en/erinnerungen-an-den-yagan-allendes...


As well as the artificially increased valuations for every company with the .ai TLD for their landing page.


I did a quick check and it seems like the entire Uber and clone industry is net negative. Uber, Lyft, Didi, Grab seem to have lost more money than was invested and once they stabilize they look like mediocre businesses at a global scale (Uber's been banned from many jurisdictions for predatory practices and in many other jurisdictions it seems to trend towards being as expensive as taxis or more once profitability becomes a target).


Fair enough. I personally would have a hard time spotting an outsized lost opportunity value with confidence, if it existed.

It feels, though, that this argument could (maybe a little too easily) be applied to any new industry sector, in horse-vs-car fashion.


When Meta "lost" 20bn, they actually spent it on salary. The employees then go out and buy things like the Tesla Automobile, an actual product.


This sounds like the broken window fallacy. You could use the same logic to suggest that Meta dump piles of cash on the sidewalk in front of their office - it’d circulate but it wouldn’t help them.


If the same $20bn was spent on fixing a bridge, people would spend those wages to boost economic activity AND have a fixed bridge that will improve output even more. Horizon Worlds isn't a productive use of capital in that regard.

It'd be one thing if they open-sourced their VR tech, some of that could lead to productive tech down the line, but as a private company, they're not obliged to do any of that.


Last night I asked ChatGPT 4 to help me write a quick bash script to find and replace a set of 20 strings across some Liquid files with a set of 20 other strings. The strings were hardcoded; it knew exactly what they were in no uncertain terms. I just wanted it to whip up a script that would use ripgrep and sed to find and replace.

First, it gave me a bash script that looked pretty much exactly like what I wanted at first glance. I looked it over, verified it even used sed correctly for macOS like I told it, and then tried to run it. No dice:

    replace.sh: line 5: designer_option_calendar.start_month.label: syntax error: invalid arithmetic operator (error token is ".start_month.label")
Not wanting to fix the 20 lines myself, I fed the error back to ChatGPT. It spun me some bullshit about the problem being the “declaration of [my] associative array, likely because bash tries to parse elements within the array that aren’t properly quoted or when it misinterprets special characters.”

It then spat out a “fixed” version of the script that was exactly the same, it just changed the name of the variable. Of course, that didn’t work so I switched tactics and asked it to write a python script to do what I wanted. The python script was more successful, but the first time it left off half of the strings I wanted it to replace, so I had to ask it to do it again and this time “please make sure you include all of the strings that we originally discussed.”
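
For what it's worth, here is my guess at a version that would have sidestepped the error (a sketch; the second string pair is a hypothetical stand-in for the real list). macOS ships bash 3.2, which has no associative arrays, so a dotted key in arr[key] gets parsed as arithmetic, which is consistent with the error above. Parallel indexed arrays avoid the feature entirely:

    #!/usr/bin/env bash
    # Sketch only: parallel indexed arrays work even on macOS's bash 3.2.
    olds=("designer_option_calendar.start_month.label" "old.key.two")
    news=("calendar.start_month.label"                 "new.key.two")
    for i in "${!olds[@]}"; do
        rg --files-with-matches --fixed-strings "${olds[$i]}" . |
        while IFS= read -r f; do
            # BSD sed (macOS) wants an explicit backup suffix after -i.
            # The dots act as regex wildcards here, which is harmless.
            sed -i '' "s/${olds[$i]}/${news[$i]}/g" "$f"
        done
    done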

Another short AI example, this time featuring Mistral’s open source model on Ollama. I’d been interested in a script that uses AI to interpret natural language and turn it into timespans. Asking Mistral “if it’s currently 20:35, how much time remains until 08:00 tomorrow morning” had the model return its typical slew of nonsense and the answer of “13.xx hours”. This is obviously incorrect, though funnily enough, when I plugged its answer into ChatGPT and asked how it thought Mistral may have come to that answer, it understood that Mistral did not understand midnight on a 24-hour clock.
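
(For reference, the correct answer is 11 hours 25 minutes, which is trivial to check directly; the date below is arbitrary filler:)

    # Quick sanity check of the 20:35 -> 08:00-next-morning question.
    from datetime import datetime

    now = datetime(2024, 1, 1, 20, 35)    # arbitrary date, time 20:35
    target = datetime(2024, 1, 2, 8, 0)   # 08:00 the next morning
    print(target - now)                   # 11:25:00, not ~13 hours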

These are just some of my recent issues with AI in the past week. I don’t trust it for programming tasks especially — it gets F# (my main language) consistently wrong.

Don’t mistake me though, I do find it genuinely useful for plenty of tasks, but I don’t think the parent commenter is wrong calling it snake oil either. Big tech sells it as a miracle cure to everything, the magic robot that can solve all problems if you can just tell it what the problem is. In my experience, it has big pitfalls.


I have the same experience. Every time I try to have it code something that isn't completely trivial or all over the internet like quicksort, it always has bugs and invents calls to functions that don't exist. And yes, I'm using GPT-4, the best model available.

And I'm not even asking about an exotic language like F#, I'm asking it questions about C++ or Python.

People are out there claiming that GPT is doing all their coding for them. I just don't see how, unless they simply did not know how to program at all.

I feel like I'm either crazy, or all these people are lying.


> I feel like I'm either crazy, or all these people are lying.

With some careful prompting I've been able to get some decent code that is 95% usable out of the box. If that saves me time and changes my role there into code review versus dev + code review, that's a win.

If you just ask GPT4 to write a program and don't give it fairly specific guardrails I agree it spits out nearly junk.


> If you just ask GPT4 to write a program and don't give it fairly specific guardrails I agree it spits out nearly junk.

The thing is, if you do start drilling down and fixing all the issues, etc, is it a long term net time saver? I can't imagine we have research clarifying this question.


> People are out there claiming that GPT is doing all their coding for them. I just don't see how, unless they simply did not know how to program at all.

I doubt it, and certainly not for anything beyond basic. I've seen (and tried) GPTs for code input a lot, and often they come back with errors or weird implementations.

I made one request yesterday for a linear regression function (yes, because I was being lazy). So was ChatGPT... It spat out a trashy, broken function that wasn't even remotely close to working; more along the lines of pseudocode.

I complained, saying "WTH, that doesn't even work", and it said "my apologies" and spat out a perfectly working, accurate function! Go figure.

Others have turned to testing tips or threats, which is an interesting avenue: https://minimaxir.com/2024/02/chatgpt-tips-analysis/


Same experience here


I hear you. It's all pretty bad. I have spent half-days getting accustomed to and dealing with gpt garbage — but then I have done that plenty of times in my life, with my own garbage and that of co-workers.

On the margins it's getting stuff good enough, often enough, quick enough. But it very much transformed my coding experience from slow deliberation to a rocket ride: Things will explode and often. Not loving that part, but there's a reason we still have rockets.


Most of us don't ride nor want to ride rockets. We do use them to blow each other up, statistically.


> Most of us don't ride nor want to ride rockets.

The amount of noise generated by pretty much anything new and shiny on this website would disagree with that.

> We do use them to blow each other up, statistically.

Very true — and yet :^)


I've had the same experience, but I usually get what I want. Admittedly, I'm using it to script around ffmpeg, which is a huge pain in the ass.

That said, every single script it churns out is unsafe for files with spaces on the first go-round. Like... OK. It's like having a junior programmer with no common sense available.
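
The usual fix, for anyone hitting the same thing (a sketch; the .mkv-to-.mp4 remux is just a placeholder task):

    # Space-safe batch loop: -print0 plus read -d '' keep filenames with
    # spaces intact; -nostdin stops ffmpeg from eating the loop's stdin.
    find . -name '*.mkv' -print0 | while IFS= read -r -d '' f; do
        ffmpeg -nostdin -i "$f" -c copy "${f%.mkv}.mp4"
    done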


ChatGPT Plus is approx 300 USD yearly. You are earning 30k (100x) USD more yearly due to ChatGPT? Or earning 100k with 30% use of your time?


You may be getting 100x value out of your 30 bucks.

Is that the only variable that one needs to consider to gauge if this is bubble territory?


Other variables is basically what I was asking for, in the first sentence of my comment.


How many GPUs are being delivered today and for how long will they be used / what's their life?

Who is funding the purchase of those GPUs?

If VC money, then what happens if the startups don't make money?

Are users only using AI apps because they are free, and will they dump them soon?

Isn't there competition in semiconductors? Won't we have chips-as-a-commodity soon? LLMs-as-a-commodity?

Is Big Tech spending all this money to create VALUE or just to survive the next phase of the technological revolution? (e.g. the AI rush)

If prices are high, and sales are high, and competition is still low -- then how much is Nvidia actually worth? And if we don't know, why is it selling for so many times earnings?

https://www.philoinvestor.com/p/downside-at-nvidia-and-the-n...


These are all potentially interesting questions, but do any of them offer a concise answer to the bubble question, or are we just speculating?


Can I paste links in replies here? I wrote an article about the AI bubble a few weeks ago.


Yes you can.


It's like the previous craze with blockchain. Everyone and their dog was about to use blockchain to do the most awesome thing ever. And even things that didn't really lend themselves to blockchain were suddenly presented as prime examples of how to use blockchain.

I'm not saying both technologies don't have their uses, but the hype around them is crazy and not healthy.


AI has massively increased my productivity. There are real users like me who will continue to pay up. This is not the same.


Like with blockchain, there are real uses; of course, I totally agree!

But I was thinking more about the craze surrounding the whole thing: like with blockchain, you can see everyone trying to sell you AI for kinda everything.


Yeah, but how many people like you are out there? We'll find out in about 5 years.


Yep, there are similarities between the blockchain rush and the AI rush.

But in my last piece on AI, I said that AI is 50X Blockchain!


That answer inspired me to ask ChatGPT how blockchain could enhance AI. And, to say the least, I wasn't disappointed!

It printed out a pretty long answer with several good points on how it could help to enhance AI.

There we have it! Blockchain is about to solve all problems we have today with AI! :D


"Blockchain solves this..."


This technology is already fully automating copywriting and almost replacing concept artists right now. However, it is true that the current valuations are for something far more than that, and so far it doesn't seem like LLMs will be able to do much more.

I was talking to someone who just retired from a programming position at a FANG, and he seems to think that AGI (artificial general intelligence) is only a few years away just based off what he sees with ChatGPT, and he's dumping all his money into AI stocks. The level of hype and over-extrapolation is so absurd, and the fact that it can affect someone with a technical background...

It really does seem like a bubble to me.


Isn't the role of a concept artist mainly worldbuilding first, with drawing second? AI does not seem to have a good world model; it makes pretty pictures, but they lack thought behind them.


Agreed, and I think there are a number of... over-enthusiastic executives with dollar signs in their eyes who are in for a rude awakening about this. It might sound great to replace your artistic staff with an LLM subscription, until you realize that you laid off all the creative vision with them. That isn't to say I think it'll go away though, I wouldn't be surprised if art students in the future are taught how to wrangle LLMs to supplement their own designs.


I really don't know the details.

You can see a concept artist here discovering he stopped getting work after a company blurted out that they had switched to AI.

https://twitter.com/_Dofresh_/status/1709519000844083290

My guess is that a lot of people can have ideas, so you don't need an artist to bring them to life anymore.


P/E ratios (so far) say otherwise: all the big public tech companies have reasonable P/E ratios and are investing their profits heavily, in contrast to, say, the dotcom bubble, when over 40% of the companies had P/E ratios unsustainably over the moon.

Are there a gazillion companies riding the "AI everywhere" wave to raise money? yes, yes there are. Will most of them fail? sure.

But the big players are fine at the moment, so there is nothing that can burst very hard (yet), and the difference is in the denominators, which are, so far, going up.

Of the top ones only NVIDIA and Amazon have P/E ratios a bit too high and among the top 10 only AMD's is way too high.


> This is the closest we've ever had to multiple large respected tech companies selling "snake oil" a cure all.

The problem with this take is that you can deliver real results. At my current $dayjob we do the very dumbest thing, which is text -> labels -> feedback -> fine_tune -> text... and surface the labels as part of our search offering, and it's rocketed to the most useful customer feature in less than 6 months of rolling it out. Customers define labels that are meaningful to them, and we have a general-purpose AI classify text according to those labels. Our users gleefully (which is shocking given our industry) label text for us (which we just feed into fine_tuning) because of just how fast they can see the results.

Like it's as grug brain as it gets and we bumbled into a feature that's apparently more valuable to our users than the rest of the product combined. Folks want us to sell it as a separate module and we're just hoping they don't realize it's 3 LLMs in a trenchcoat.
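
The shape of the loop, roughly (a sketch rather than our actual code; the model name, label set, and storage are stand-ins):

    # text -> labels via a general-purpose LLM; user corrections get
    # collected as fine-tuning examples, closing the feedback loop.
    from openai import OpenAI

    client = OpenAI()
    LABELS = ["billing", "outage", "feature-request"]  # customer-defined

    def classify(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # swapped for the fine-tuned model later
            messages=[
                {"role": "system",
                 "content": f"Reply with exactly one label from: {LABELS}"},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content.strip()

    def record_correction(text: str, label: str, dataset: list) -> None:
        # Each user correction becomes one row of fine-tuning data.
        dataset.append({"messages": [
            {"role": "user", "content": text},
            {"role": "assistant", "content": label},
        ]})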


I agree that it's opaque and that's why I said that AI can be the ULTIMATE bubble.


It's a shame the phrase "black box" exists in machine learning when it could have been "black bubble".


This opinion is so crazy to me. How can it be a bubble if it's already providing me with 10x productivity gains? Things I've used GPT-4 to automate: replying to all emails, writing entire books, doing all the legal paperwork for my new business, completing daily code challenges, generating shopping lists, writing birthday messages, and replying to my family group chat. And that's when the tech is still relatively new! Once the technology advances and I'm able to automate basically all of my daily tasks, who knows where I'll be.


Well that's your view, but we keep seeing incredible progress year on year.

No one thought beating the game of Go was feasible.

No one thought self-driving cars would actually work.

No one predicted ChatGPT. See where we're at now with multimodal models.

And Sora.

The truth is that you can't predict anything about this tech anymore, because all expectations keep being blown away.

AGI may be right there, and that's what's driving the money.


> No one thought self-driving cars would actually work.

Many people thought self driving WOULD work and that we'd be further along than we are now. We have vastly overestimated how far we'd be, and vastly underestimated how much time and effort it would actually take.

Self driving cars as they exist today are still mere toys compared to where the industry thought they were going to be. Look at Cruise, Waymo, Zoox, Uber's ex-self driving car division and others.

We are not anywhere near the self-driving autonomous cars we had hoped for.


> No one thought beating the game of Go was feasible.

Oh, yes, we did, once we beat chess. It was just a matter of time.

> No one thought self-driving cars would actually work.

And... they don't? Call me when I can buy a regular car where I can sleep while traveling 8 hours, driven by the car itself. We're probably "flying cars" away from that.


This doesn’t make any sense. If nobody thought the things were possible then why did they bother trying to do them?

And even spend as much time and resources as they did trying to do them?


> When the AI bubble bursts I wouldn't be surprised if takes down major tech companies with it.

Hopefully, knock wood. Maybe it will even slow down the general enshittification created by those major tech companies.


I think there is a bubble, but not in the traditional sense of this all being nonsense. Obviously, AI is going to be a world-changer, but I think in a few years many companies are going to find, at least for them, that their projects using AI are wildly unprofitable. Good AI is expensive. This could take years to play out, but when Nvidia is trading higher than Microsoft and Apple in total market cap, we'll be well into it. I don't rule that out.


Eh, I don’t know. I find ChatGPT very useful at least 10 times a day.


Wait, does this mean that private corporations from the United States will compete against national governments in space? If so, it’s history repeating itself, and the private corporations will probably win again. Great news and proof that it’s working. Bravo!


Still somewhat different from Apollo, since NASA had a bigger role then?

https://www.construction-physics.com/p/building-apollo

Though notably here, the CEO is a NASA veteran.


The problem is that Chinese companies are subsidized by their government to manufacture things of little or no intrinsic or critical value. Automated vacuum cleaners and consumer drones are niche electronic novelties. Electric cars using solid-state batteries are also a novelty that will be obsolete once electric engines that use liquid fuels (fuel cells) become mainstream.

The purpose of subsidizing what are zombie companies is to maximize employment to ensure internal stability. The wins these companies show are propaganda wins only and don’t make the country more competitive. Foreign manufacturing is also migrating out of China at an alarming rate, as shown by falling exports and GDP growth.

None of the development in the Chinese technology sector is sustainable. These companies would never survive on their own without subsidies and are dependent on them. It’s a cascading failure waiting to happen in the Chinese economy and will likely be a global shock. The Americans may appear to take longer to develop winning companies, but once they do, those companies tend to be sustainable and long-lasting as organic enterprises.

Edit: The American free market is working as intended because it rightly values robotic vacuums as useless devices.


> Electric cars using solid-state batteries are also a novelty that will be obsolete once electric engines that use liquid fuels (fuel cells) become mainstream.

This seems like a big statement; can other experts comment?


Current battery tech has a slow and steady progression of improvement. There is bound to be a market disruptor at some point in the future, but it’s far from a guarantee that it’s going to be hydrogen.

I believe the biggest hurdle to any change from current battery tech is that it costs so much to develop an entirely new process and build factories. Most innovation is in the form of small adjustments. For hydrogen to overcome this hurdle it would have to either be extremely cheap or have some unique property. For cars I don’t see hydrogen having that much of an advantage, but maybe electric planes could feasibly be powered by hydrogen due to the much better energy-to-weight ratio.


Yeah, I agree. The OP's statement is insane.

> electric engines that use liquid fuels (fuel cells) become mainstream

That is surely hydrogen. Do they understand the conditions required to store hydrogen as a liquid, let alone natural gas?


You have so many incorrect views of Chinese companies, the technologies these companies have, and what is actually happening on the ground in China. You also vastly underestimate the real complexity of making today's products, even ones as mundane as a hair dryer or a toy. Chinese manufacturing makes making them look easy; people think all you need is a bunch of cheap labor and you are set. No, it's not. Also, for white-label products like hair dryers, washing machines and air conditioners, it's the Chinese companies who design, build and test the entire lifecycle of the product; importers buy them and slap on their own brand.

Think about what goes into a hair dryer: exterior design that looks good and is functional. How do you make the plastic cover and do the plastic injection molding? How do you design all the internal parts (fan, motor housing, heating wire, power circuits, micro-controllers, etc.) and make sure everything fits? Some companies even do individual components themselves, like the brushless motor, or there is a Chinese supplier that makes them, which allows much faster turnaround.

Then there's the testing for each component: electrical, heat, water and moisture testing. Then you design a mass-manufacturing system with automation and human labor that achieves really high yield and low wasted material. This is the hardest part. It's easy to make a hair dryer by hand, taking 100 human hours, and make sure it works. It's much harder to make 1M hair dryers per month that will be used in all sorts of environments and with all kinds of abuse, make sure they work well for a number of years so customers don't return them (or you go bankrupt from recalls and warranty claims), throw out only the absolute smallest number of manufacturing defects, and really control your cost structure so you still make a profit when importers are squeezing your price.

Then there's the supply chain and logistics: shipping from suppliers and shipping to customers. Then you create a number of products for different markets. China can manufacture for cheap, but people don't realize that manufacturing cheaply and at massive quantities is a technology in itself. It's also management, business process, even company and worker culture. China doesn't have the cheapest labor cost; it's the combination of everything that produces a physical product with that level of quality, fit and finish at that price point.


No, it's not real engineering at all, but a different thing altogether. Engineering is the application of hard sciences like chemistry, physics and biology to solve real-world problems. Software development can be part of what an engineer does to solve such a problem, but it's not engineering in itself. Software "engineering" is actually applied mathematics, particularly logic, used to program a computer to complete tasks organized as algorithms. This is why some algorithms and functions in software can be proved correct via mathematical proofs.
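
As a toy illustration of that last point, here is a hand-rolled addition on the naturals with two small correctness proofs in Lean 4 (a sketch; the definition and names are arbitrary):

    -- A hand-rolled addition on Nat, with one proof by definitional
    -- unfolding and one by induction.
    def add : Nat → Nat → Nat
      | n, 0     => n
      | n, m + 1 => add n m + 1

    theorem add_zero' (n : Nat) : add n 0 = n := rfl

    theorem zero_add' (n : Nat) : add 0 n = n := by
      induction n with
      | zero => rfl
      | succ m ih => simp [add, ih]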


Joke's on the writer, because Buddhism originated in India, evolved from Hinduism, and spread as far as southern Russia and Central Asia, in addition to East Asia and Southeast Asia, where it became popular. These days, ignorant people who haven't had basic history lessons get to write articles. We also learned this in high school: about Ashoka, and how the Indian emperor spread Buddhism literally everywhere in the world. But whatever, racists aren't known for their learning, understanding or intelligence.


Buddhism did not evolve from Hinduism, nor even from its predecessor Brahmanism (calling the place it came from "India" is also a bit of a stretch, but to a lesser degree):

* it may have actually been reacting first to Zoroastrianism and its idea that the absolute Truth and Lie can be known: https://press.princeton.edu/books/paperback/9780691176321/gr...

* even if not, then it was either a reaction against Brahmanism (https://ahandfulofleaves.files.wordpress.com/2012/02/how-bud... (PDF)), or at the very least an independent development: https://www.academia.edu/63732680/Early_Buddhism_and_its_Rel...

> We come to the conclusion that early Buddhism as a whole has developed independently from Brahmanism, with selective influences from Brahmanism and non-Vedic spiritual movements, altering and utilizing these influences for its own growth against its religious competition.

