It's not in the interests of the UAE to improve. There's the (possibly misattributed? but topical nonetheless) quote by the previous emir of Dubai:
> My grandfather rode a camel, my father rode a camel, I drive a Mercedes, my son drives a Land Rover, his son will drive a Land Rover, but his son will ride a camel.
They want to prolong the Land Rover phase as long as possible.
Do folks see a mention of Claude as indicative of an ad? I usually say it because at work we've got a couple different options and I like to mention which one in particular I was using. But maybe I'll just start saying AI on forums unless someone asks me to specify.
I was worried about skill atrophy. I recently started a new job, and from day 1 I've been using Claude. 90+% of the code I've written has been with Claude. One of the earlier tickets I was given was to update the documentation for one of our pipelines. I used Claude entirely, starting with having it generate a very long and thorough document, then opening up new contexts and getting it to fact check until it stopped finding issues, and then having it cut out anything that was granular/one query away. And then I read what it had produced.
It was an experiment to see if I could enter a mature codebase I had zero knowledge of, look at it entirely through an AI, and come to understand it.
And it worked! Even though I've only worked on the codebase through Claude, whenever I pick up a ticket nowadays I know what file I'll be editing and how it relates to the rest of the code. If anything, I have a significantly better understanding of the codebase than I would without AI at this point in my onboarding.
I have never learned so quickly in my entire life as when I post a forum thread in its entirety into an extended-thinking LLM and then get to ask free-form questions for 2 hours straight if I want to. Having my questions answered NOW is so important for how I learn. Back in the day, by the time I found the answer online, I had forgotten the question.
Same. I work in the film industry, but I’ve always been interested in computers and have enjoyed tinkering with them since I was about 5. However, coding has always been this insurmountably complicated thing- every time I make an effort to learn, I’m confronted with concepts that are difficult for me to understand and process.
I’ve been 90% vibe coding for a year or so now, and I’ve learned so much about networking just from spinning up a bunch of docker containers and helping GPT or Claude fix niggling issues.
I essentially have an expert (well, maybe not an expert, but an entity far more capable than I am on my own) whose shoulder I can look over, who I can ask as many questions as I want, and who will explain every step of the process to me if I want.
I’m finally able to create things on my computer that I’ve been dreaming about for years.
Some people talk like skill atrophy is inevitable when you use LLMs, which strikes me as pretty absurd given that you are talking about a tool that will answer an infinite number of questions with infinite patience.
I usually learn way more by having Claude do a task and then quizzing it about what it did than by figuring out how to do it myself. When I have to figure out how to do the thing, it takes much more time, so when I'm done I have to move on immediately. When Claude does the task in ten minutes I now have several hours I can dedicate entirely to understanding.
You lose some, you win some. The short-term win could be much higher, but imagine that the new tool suddenly gets yanked out from under your feet. What do you do then? Do you still know how to handle things the old way, or do you run into skill atrophy issues? I’m using Claude/Codex as well, but I’m a little worried that the environment we work in will become a lot more bumpy and shifty.
> however imagine that the new tool suddenly gets yanked out from under your feet
When you have a headache, do you avoid taking ibuprofen because one day it may not be available anymore? Two hundred years ago, if you gave someone ibuprofen and told them it was the solution for 99% of the cases where they felt some kind of pain, they might be suspicious. Surely that's too good to be true.
But it's not. Ibuprofen really is a free lunch, and so is AI. It's weird to experience, but these kinds of technologies come around pretty often, they just become ubiquitous so quickly that we forget how we got by without them.
The "infinite patience" thing I find particularly interesting.
Every now and then I pause before I ask an LLM to undo something it just did or answer something I know it answered already, somewhere. And then I remember oh yeah, it's an LLM, it's not going to get upset.
I used to speak Russian like I was born in Russia. I stopped speaking Russian, and even though I'm curious and responsible every day, I can hardly say 10 words in Russian today. If you don't use it (not just stay curious and responsible), you will lose it - period.
Programming language is not just syntax, keywords and standard libraries, but also: processes, best practices and design principles. The latter group I guess is more difficult to learn and harder to forget.
I respectfully but completely disagree. Not only will you just as easily lose the processes, best practices, and design principles, but they will also change over time (what was best practice when I got my first gig in 1997 is not a best practice today, and that's true even compared to just 4-5 years ago, not only going all the way back to the '90s). All of that is super easy to both forget and lose unless you live it daily.
A fairer comparison would be writing/talking about the Russian language in English. That way you'd still be focused on Russian. Same with programming - it's not like you stop seeing any code. So why would you forget it?
For example, Claude was very eager to include function names, implementation details, and the exact variables that are passed between services. But all the info I need for a particular process is the names of the services involved, the files involved, and a one-sentence summary of what happens. If I want to know more, I can tell Claude to read the doc and find out more with a single query (or I can just check for myself).
Are you sure you would know if it didn't work? I use Claude extensively myself, so I'm not saying this from a "hater" angle, but I had 2 people last week who believe themselves to be in your shoes send me pull requests which made absolutely no sense in the context of the codebase.
No, it hasn't. I did not have a problem before AI with people sending in gigantic pull requests that made absolutely no sense, and justifying them with generated responses that they clearly did not understand. This is not a thing that used to happen. That's not to say people wouldn't have done it if it were possible, but there was a barrier to submitting a pull request that no longer exists.
It's good that it's working for you but I'm not sure what this has to do with skill atrophy. It sounds like you never had this skill (in this case, working with that particular system) to begin with.
>I have a significantly better understanding of the codebase than I would without AI at this point in my onboarding
One of the pitfalls of using AI to learn is the same one I'd see with students using tutoring services pre-AI. They'd have tutors explain the homework to them and even work through the problems with them. Thing is, any time you see a problem or concept solved, your brain is tricked into thinking you understand the topic enough to do it yourself. It's why people think their job interview questions are much easier than they really are; things just seem obvious when you've thought through the solution. Anyone who's read a tutorial, felt like they understood it well, and then struggled for a while to actually start using the tool to make something new knows the feeling very well. That Todo List app in the tutorial seemed so simple, but the author was constantly making decisions that you didn't have to think about as you read it.
So I guess my question would be: If you were on a plane flight with no wifi, and you wanted to do some dev work locally on your laptop, how comfortable would you be vs if you had done all that work yourself rather than via Claude?
> If you were on a plane flight with no wifi, and you wanted to do some dev work locally on your laptop, how comfortable would you be vs if you had done all that work yourself rather than via Claude?
Probably about as comfortable as I would be if I also didn't have my laptop and instead had to sketch out the codebase in a notebook. There's no sense preparing for a scenario where AI isn't available - local models are progressing so quickly that some kind of AI is always going to be available.
- You know, Lindsay, as a software engineering consultant, I have advised a number of companies to explore closing their source, where the codebase remains largely unchanged but secure through obscurity.
- Well, did it work for those companies?
- No, it never does. I mean, these companies somehow delude themselves into thinking it might, but... but it might work for us.
> Like I said… skipping ahead, let me guess, For The Children; we must block local models, AI except for The NYSE Chosen Ones, encryption, and must have digital ID to use everything. Tell me this article is in favor of anything else.
No, delisting online nudify apps will take care of 99% of this. There's no reason for them to exist.
While that might be the case for now, as the offender would need access to a PC with an NVIDIA GPU, that will not hold for long.
Once you can run decent image generation models on your phone, what shall we do? Prohibit apps that allow you to run custom models? Add a mandatory safety check?
It seems like a pointless exercise to me - better punish the actual crime rather than try to regulate the tech.
Prohibiting apps from shipping with models that can nudify people seems reasonable. Custom models that can do it will be banned from search results and major sites. Just like child porn, they'll exist online but be quite difficult to find.
There is significant societal value in preventing crimes rather than just punishing them.
The prevention here would require taking down existing models that are already widely distributed and mostly used lawfully.
And even if you outlawed the distribution of uncensored models, the next question arrives once the fine-tuning can be done directly on the phone and someone makes an accessible app.
Do we then prohibit apps that allow you to fine-tune a model? Do we mandate safety scans of the images in the dataset?
Not many existing models will do it or do it well.
> when you can do the fine tuning directly on the phone
Well you'd need millions of on/off nude samples to start with, so I don't think that's likely. And that's assuming we get a billionfold increase in mobile CPU performance or a billionfold decrease in fine tuning compute requirements.
You're reaching for things that, if they happen, are a decade or more out. It's ok to solve today's problems today. Perfect is the enemy of good.
LoRA training for recent Flux 2 Klein or Z Image Turbo models can be done on a consumer GPU in a few hours, often using datasets with fewer than a hundred images.
How much quality that gets you I don't know - perhaps the versions people publish are trained with larger datasets.
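For context on why a consumer GPU suffices: LoRA freezes the pretrained weights and trains only a small low-rank update on top of them. A minimal NumPy sketch of the idea (all shapes, the rank, and the alpha value here are illustrative assumptions, not the settings of the finetunes mentioned above):

```python
import numpy as np

# Frozen pretrained weight matrix: d_out x d_in (illustrative sizes).
d_out, d_in, r = 1024, 1024, 16
W = np.random.randn(d_out, d_in)

# LoRA trains only two small matrices: B (d_out x r) and A (r x d_in).
# B starts at zero so the adapted model initially behaves like the original.
B = np.zeros((d_out, r))
A = np.random.randn(r, d_in)

alpha = 16  # scaling hyperparameter, commonly set near the rank
W_adapted = W + (alpha / r) * (B @ A)

# Trainable parameters shrink from d_out*d_in to r*(d_out + d_in).
full_params = d_out * d_in        # 1,048,576
lora_params = r * (d_out + d_in)  # 32,768
print(full_params // lora_params)
```

With 32x fewer trainable parameters (at these sizes), the optimizer state and gradients fit comfortably in consumer VRAM, which is what makes the "few hours on one GPU" workflow plausible.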
I know that one popular anime finetune of SDXL cost $180k in compute on 8xH100 for two months.
Teenagers aren't running generative models on their home PCs, they're using online tools. Sure, some people will do it offline, but these are by and large crimes of opportunity.
Horny Teenagers Find A Way. If the online tools were all magically shut down today, by tomorrow, they'd all be running local models on the local computer nerd's gaming GPU rig.
I doubt it. Photoshop was already pretty accessible, more so than downloading sketchy models and getting them to run.
It's also a matter of scale. If a dozen teenage boys in a class are generating nudes, they feel safety in numbers. If it's only the weird computer kid who can do it, it's far easier to address and far more likely that it'll blow back on him.
Yeah, we've pretty successfully gotten rid of online child porn from like front page search results and public social media, there's no reason why we couldn't do the same for this.
The article isn't about content on the front page of search results. The content in question is being circulated privately through chats. The proposed solutions, so far, have been the broad, nasty, invasive content scanning that nobody on HN wants (for good reason).
What the hell kind of queries are you running? I use Claude Pro all the time for asking questions, doing data analysis, writing side projects, and I very rarely get rate limited.
I use Claude Max 20x at work and I rarely hit 10% session utilization, which implies even using Claude to write code all day only uses 2x the Pro token limit.
Are you just telling it to try again when you get a response you don't like?
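The utilization arithmetic above checks out, assuming the "Max 20x" plan name means roughly 20x the Pro session limit (an assumption from the plan naming, not a published quota):

```python
# If a Max 20x session is ~20 Pro sessions' worth of usage, then 10%
# utilization of a Max 20x session equals about 2 Pro sessions.
max_multiplier = 20
session_utilization = 0.10
pro_equivalents = session_utilization * max_multiplier
print(pro_equivalents)  # 2.0
```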