Hacker News

I see you speak from experience. I feel like I'm watching the same cycle play out over and over: a new, transformative technology lands; people with a vested interest spend a lot of time denouncing it (your examples mostly land for me); the technology gets over-hyped and fails to meet some bar; and the haters start crowing that it's all B.S. and will never be useful, etc. etc.

Meanwhile, people are quietly poking around figuring out the boundaries of what the technology really can do and pushing it a little further along.

With the A.I. hype I've been keeping my message pretty consistent for all of the people who work for me: "There's a lot of promise, and there are likely a lot of changes coming if things keep going the way they are with A.I. But even if the technology hits a wall right now that stops it from advancing, things have already changed, and it's important to embrace where we are and adapt."



Such a sane, nuanced take on new technologies. I wish more people were outspoken about holding these types of opinions.

It feels like the AI discourse is dominated by irrationally exuberant AI boosters on one side and people with an overwhelming, knee-jerk hatred of the technology on the other; reading tech news often feels like watching two people who are both wrong argue with each other.


Moderates typically have a lot less to say than extremists and don't feel a need to have their passion heard by the world. The discussion ends up being controlled by the haters and hypers.

New technologies in companies commonly hit the same pitfalls that burn out users. Companies have very little ability to tell, at the purchasing level, whether a technology is good or bad. The C-levels who approve the invoices are commonly swayed not by the merits of the technology but by the persuasion of the salespeople or by fear of falling behind others in the same industry. This leads to a lot of technology that could and should be good being absolute crap for the end user.

Quite often the 'best' or at least most useful technology shows up via shadow IT.


And a subgroup (or cousin?) of the exuberant AI boosters are the people absolutely convinced that LLM research leads to the singularity in the next 18-24 months.

I really do wish we could get to a place where the general consensus was something similar to what Anil wrote - the greatest gains and biggest pitfalls are realized by people who aren't experienced in whatever domain they're using it for.

The more experience you have in a given domain, the narrower your use-cases for AI will be (because you can do a lot of things on your own faster than the time spent coming up with the right prompts and context), but paradoxically the better you will be at using the tools, because of your increased ability to spot errors.

*Note: by "narrow" I don't mean useless, I just mean benefits typically accrue as speed gains rather than knowledge + speed gains.


Is AI like other technologies, though? Most technologies require a learning curve that usually steepens as the technology develops and adds features. They become "skills" in themselves. They are tools to be used, not the users of the tools themselves.

AI seems like the opposite to me. It is the technology that is "the learning curve" in the long term. Its whole point, long term, is to emulate learning/intelligence - it is trying to be the worker, not the worker's tool (whether it succeeds or not is another story). The industry seems to treat it as just another tech/tool you need experience and training in, and I wonder whether that's the right approach long term.

Many people (myself included) will be wondering whether learning to use "AI" is really just an accessibility/interface problem. My time is valuable: do the productivity gains (which may only last a year or so before things change again) outweigh the time spent learning and the cost of developing tools/wrappers/etc.? Everyone will have a different answer to this question based on their current tradeoffs.

I ask the question: if I don't need it right now (e.g. code is only 10-20% of my job), why bother learning it when future AI will require even less intelligence/learning to use?


Unfortunately, thoughtful, nuanced takes don't make headlines, don't get into Harvard Business Review, and don't end up as memos on the CEO's desk. Breathless advocacy and knee-jerk dismissals get the clicks and those are the takes that end up bubbling to the top and influencing the decision makers.



