I feel most people are rooting for it to be a fad rather than actually believing it is one. We've barely started to hook these things up to our business processes, and companies are investing heavily in doing so right now. Give it a few years.
Reasons for fad status, IMHO: features companies want to charge for that I'm willing to pay for: none. Features added to existing products that have made the product worse: actually more than zero.
Maybe this tech is great for people who aren't smart enough to make decisions and do research for themselves. Maybe it's great for companies that plan to massively cut labor costs. Maybe it's ...
The jury may still be out on what, if any, lasting changes this tech will make to our world for the better, but what I hope is abundantly clear by now is that the stratospheric valuations people have slapped on this industry are unlikely to be justified any time soon.
Well, I don't know the future, but if it were a fad, this is how I would expect it to play out: businesses start investing heavily in integrating it into their processes. Then the truth comes out. Either it works, and that is great, or the businesses get burned. If they get burned, sentiment turns negative and it becomes a past fad.
You're right that it's too early to know, but I think we are right on the cusp of finding out.
I think there is both fad and substance to AI. Some investments in AI are bad ideas that won't work out and so are a waste. However, AI is still useful for some activities, and those investments will change the business. In short, it's like every other time some part of AI became a fad and went mainstream.
Part of the problem is the hopelessly broad umbrella term "AI".
Sure, ChatGPT is AI. StableDiffusion is AI. But...
A computer-controlled Zergling in StarCraft: Brood War is AI.
The OCR tools in your phone are AI.
The imminent collision detection in your car is AI.
Skynet is a (fictional) AI.
Etc.
This is a confusion being willfully fostered by companies like OpenAI (particularly the conflation of LLMs like ChatGPT with fictional "strong" AIs, or AGIs), and cavalierly ignored by the media, tech-focused and mainstream alike.
Some of these subsets of AI are 100% successful products already out there today—or even 20 years ago! Some have interesting possibilities. Some are most likely fads. But as long as we're talking about them all with the same label, and not drawing explicit distinctions in discussions like this, we're going to continue to have people arguing past each other because one of them is talking about OCR and the other is talking about Skynet.
Right. The fundamental difference is that LLMs are a technical achievement that almost no one expected to come this fast. Neural networks were long considered a dead approach—not anymore. In 2015, most of us thought we’d be where we are today in NLP circa 2040.
The business fad will no doubt end in a fiery crash as people discover the hard way the limitations of LLMs, but the underlying achievement is still real. This is more like 2001. Lots of dot-coms died either because they were silly or tragically ahead of their time, but the Internet never went away. (Unfortunately, it does seem to be evolving into a worse version of itself due to platform decay, but that’s another topic entirely.)
ML is used extensively in business and I don’t see any indication that it’s fading. I don’t think I saw any real effort to put VR into business processes. Hype around AR has popped up and fizzled quickly a few times, but still no real push.
What's happening right now is a fad, even if some aspects of it end up making long-term impacts. The earlier AI fads of the 60s and 70s, the expert systems of the 80s, and deep learning of the early 2000s died off but left a few bits of useful stuff.
Similar to much of the CASE and UML hype: we didn't get everything promised, but we still got some useful IDE features and a recognizable way to express code diagrammatically as a documentation tool.
None. But companies are doing it regardless of what we think. If it works, it works.
Verification of customer documents, monitoring and interpreting regulatory changes, matching resumes to open positions, support ticket triage, better automated customer support, ...
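Ticket triage, for instance, is mostly a classification problem, and the wiring is trivial. A rough sketch of the shape of it, assuming the OpenAI Python client; the model name and categories are placeholders, not anything from a real product:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder categories; a real helpdesk would have its own taxonomy.
CATEGORIES = ["billing", "bug_report", "feature_request", "account_access"]

def triage(ticket_text: str) -> str:
    """Ask the model to pick one category; a human still reviews the queue."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Classify the support ticket into exactly one of: "
                        f"{', '.join(CATEGORIES)}. Reply with the category only."},
            {"role": "user", "content": ticket_text},
        ],
    )
    answer = resp.choices[0].message.content.strip().lower()
    # Fall back instead of guessing when the model goes off-script.
    return answer if answer in CATEGORIES else "needs_human_review"

print(triage("I was charged twice for my subscription this month."))
```

The `needs_human_review` fallback is doing a lot of work there, of course; how often it fires is the whole question.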
There are quite serious consequences for getting this wrong, especially systematically (you may get away with making a few mistakes, but you don't want to be the bank of choice for the North Korean secret service, say). So if it turns out not to work well, "but the magic robot said so" will not be accepted as an excuse, and there will be fines and maybe prison sentences.
> monitoring and interpreting regulatory changes
As above. "We're too cheap to ask a lawyer so we had a magic robot tell us if it was okay" _absolutely_ will not fly.
> matching resumes to open positions
"Half of the people we've hired for the last two years are totally useless, how are we screening these resumes again?"
The customer support stuff is the only thing you've listed where a certain amount of failure is tolerable for many businesses (most customer support is already quite bad and often already uses questionable automation; if you have _good_ customer support you should be more cautious, but most companies don't). But using current LLMs for the important stuff? Nah, that's not going to go well.
I think you're misreading the situation by treating it as 'all or nothing'.
If I'm not mistaken, it's common for a legal team to analyze documents in a hierarchical fashion: a less senior person reads and highlights parts of a document, and later a more senior one carefully analyzes those points. The 'easy' part of the job is the target for automation.
As for the verification of customer documents, oh boy, I've seen some stuff first-hand, and I can tell you that they are definitely not that careful.
This is the thing. I've tried many of these successfully. You need human oversight, but as of today I've been able to consistently:
1. Debug k8s and implement niche fixes
2. Write code faster
3. Fill out a Slack update template just by talking to an LLM
4. Make a system for my partner to generate the first draft of her standardized report (required for assessment)
5. Draft email responses to clients, customers, strata council, lawyers
6. Turn docs into Mermaid diagrams (rough sketch below)
7. POC new features
8. Write API integration code from a docs URL
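As an example of #6, here's roughly all it takes (a sketch assuming the OpenAI Python client; `architecture.md` and the model name are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder doc; in practice this was internal docs of various shapes.
with open("architecture.md") as f:
    doc = f.read()

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Convert the following technical doc into a Mermaid "
                    "flowchart. Output only the Mermaid code, nothing else."},
        {"role": "user", "content": doc},
    ],
)

# Paste the output into any Mermaid renderer; the human-oversight step is
# checking that the diagram actually matches the doc.
print(resp.choices[0].message.content)
```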
The obvious caveat is that you still need to act as QA and know what you're doing.
But the general improvements, productivity gains, and new possibilities are all real. I think your (hard-earned) engineer cynicism is now getting in your way.
2. yes! but I'd like specific things that a lot of models don't do well quite yet, like accurate, complete citing and contextualizing with quotes across sources, etc.
3. Eh, pretty unexcited but I can see some limited utility.
4. 100% no.
5. Skeptical, but there are definitely some good use cases here: smaller bug fixes and linting/vuln/anti-pattern scanning could be good applications, but I'm not sure if it's better than the existing non-AI tools.
6. Maybe? If you mean code, existing refactor tools are pretty darn good and are deterministic: they do the thing or they don't. If the tool can't make strong guarantees about that, I'm not interested.
7. Yeah I think that's pretty good. Related: I think there's an application for detecting consistency issues between products/pages/etc that should have a unified design and UX.
8. That's basically #2?
9. I think execs will dump the tool the first time it makes the wrong decision, and I think people the tool reaches out to on behalf of the exec won't take those communications as seriously.
10. No. That's spam, full-stop.
11. No. Users aren't going to take it seriously and aren't going to respond in the same way. It will also be a reputational risk. "Would you mind taking a survey about your experience with our product?" followed by a chat conversation with an AI is going to be dragged online.
12. Absolutely no one wants this.
13. You mean as an accessibility feature? Because I could definitely see that.
I'll bring up that the original question was what ways you would want AI integrated into products, not "what ways are people integrating AI in a desperate chance to hop on the boat" (to borrow the metaphor from the article).
> but I'd like specific things that a lot of models don't do well quite yet, like accurate, complete citing and contextualizing with quotes across sources, etc.
Er, yeah, good luck with that. You're not going to get that from an LLM.
Some of them do citation okay-ish (which isn't sufficient, but suggests some ability).
But I more or less agree with your skepticism. If you looked over my comments on HN, you'd probably think I was a die-hard AI skeptic, but I'm really not, I just think the vast majority of people's understanding of and expectations of AI are disconnected from reality. I think people are viewing AI (and products, tools, innovations built with/around AI) as they hope it will be in 5 years, and not as it is right now.
I agree, but I think there's good reason to be skeptical of the effectiveness of current targeted online ads (in terms of actually getting people to buy products and services that are advertised), and I don't see how AI could make that better.
That being said, so far the people who buy online ads don't seem to actually care if they're effective or a worthwhile investment, so that may not actually matter.
"Google, whats the difference between a Compressor and a Gate?"
> A gate swings outwards, a compressor is generally used to reduce the volume of air, generally used to provide air flow through hoses to pneumatic tools.
Something that is wrong 90% of the time is useless.
As for stock prices, no comments from me.