> It might take years until the general consensus is negative about the effects of these tools.
The only thing I'm seeing offline is people who already think AI is trash, untrustworthy, and harmful, while occasionally finding it convenient when the stakes are extremely low (random search results, mostly) or treating it as a fun toy ("Look, I'm a Ghibli character!").
I don't think it'll take long for the masses to sour on AI. The more aggressively it's pushed on them by companies, or the more it negatively impacts their lives when someone they depend on (and who should know better) uses it and it screws up, the quicker that'll happen.
I work in Customer Success so I have to screenshare with a decent number of engineers working for customers - startups and BigCos.
The number of them who just blindly put shit into an AI prompt is incredible. I don't know if they were better engineers before LLMs, but I just watch them pass flags that don't exist to CLIs and then throw their hands up. I can't imagine it's faster than a (non-LLM) Google search or using the -h flag, but they just turn their brains off.
An underrated concern (IMO) is the impact of COVID on cognition. I think a lot of people who got sick have gotten more tired and find this kind of work more challenging than they used to. Maybe they have a harder time "getting in the zone".
Personally, I still struggle with Long COVID symptoms. This includes brain fog and difficulty focusing. Before the pandemic I would say I was in the top 10% of engineers for my narrow slice of expertise - always getting exceptional perf reviews, never had trouble moving roles and picking up new technologies. Nowadays I find it much harder to get started in the morning, and I have to take more breaks during the day to reset my focus. At 5PM I'm exhausted and I can't keep pushing solving a problem into the evening.
I can see how the same kind of cognitive fatigue would make LLM "assistance" appealing, even if it's wrong, because it's so much less work.
Reading this, I'm wondering if I'm suffering from "Long COVID".
I've recently had tons of memory issues and brain fog. I thought it was related to stress, and it's severe enough that I'm on medical leave from work right now.
My memory is absolutely terrible
Do you know if it is possible to test or verify if it's COVID related?
I haven't had a lot of success so far in getting a diagnosis, there's a lot of different possible things that can be wrong. Chronic Fatigue Syndrome is one place to start. I'm seeing an allergist about MCAS, I've had limited success taking antihistamines and anti-inflammatory drugs.
Mostly you talk to your doctor and read stuff and advocate for more testing to figure out why you're not able to function like before. Even if it's not "Long COVID" it definitely sounds like something is causing these problems and you should get it looked at.
> An underrated concern (IMO) is the impact of COVID on cognition
Car accidents came down from the Covid uptick but only slightly. Aviation... ugh.
And there is some evidence it accelerates Alzheimer's and other dementias. We are so screwed.
This is precisely the problem: users still need to screen and reason about the output of LLMs. I'm not sure what is generating this implied permission structure, but it does seem to exist.
(I don't mean to imply that parent doesn't know this, it just seems worth saying explicitly)
Doesn’t matter. If they feel “good enough”, that’s already “good enough”. The supermajority of the world doesn’t revolve around truth-seeking, fact-checking, or curiosity.
The things I have noted offline include a Hong Kong case where someone got a link to a Zoom call with what seemed to be his teammates and CFO, and then transferred money as per the CFO's instructions.
The error here was clicking the link in a phishing email.
But something I have seen myself is a video of Tim Cook talking about a crypto coin right after the 2024 Apple keynote, on a YT channel that showed the Apple logo. It took me a bit to realize, and reassure myself, that it was a scam, even though the video only showed him from the shoulders up.
The bigger issue we face isn’t the outright fraud and scamming, it’s that our ability to easily spot fakes is weakened - the liar’s dividend.
It’s by default a shot in the arm for bullshit and lies.
On some days I wonder if the inability to sort between lies, misinformation, initial ideas, fair debate, argument, theory, and fact at scale is the great filter.