Hacker News | ericlamb89's comments

> "The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t. The first group is shipping faster, taking on bigger challenges. The second group is… not."

Honest question for the engineers here. Have you seen this happening at your company? Are strong engineers falling behind when refusing to integrate AI into their workflow?


In my experience it's weak and inexperienced developers who gravitate to AI tools. Unfortunately they lack the domain knowledge to correctly evaluate the outcome of the AI tools they use. AI weaponizes them against their colleagues by enabling them to open more PRs and generate more text which may look reasonable at first but falls apart under serious review. Any gains I may get from AI myself are eaten away in this way.


No. The opposite. The people who “move faster” are literally just producing tech debt that they get a quick high five for, then months later we limp along still dealing with it.

A guy will proudly deploy something he vibe coded, or “write the documentation” for some app that a contractor wrote, and then we get someone in the business telling us there’s a bug because it doesn’t do what the documentation says, and now I’m spending half a day in meetings to explain and now we have a project to overhaul the documentation (meaning we aren’t working on other things), all because someone spent 90 seconds to have AI generate “documentation” and gave themselves a pat on the back.

I look at what was produced and just lay my head down on the desk. It's all crap. I just see a stream of things to fix: conventions not followed, 20 extra libraries included when 2 would have done, code not organized. A new function that should have gone in a different module, because where it is now creates tight coupling between two modules that were intentionally built to not be coupled before.

It’s a meme at this point to say, ”all code is tech debt”, but that’s all I’ve seen it produce: crap that I have to clean up, and it can produce it way faster than I can clean it up, so we literally have more tech debt and more non-working crap than we would have had if we just wrote it by hand.

We have a ton of internal apps that were working, then someone took a shortcut and 6 months later we’re still paying for the shortcut.

It’s not about moving faster today. It’s about keeping the ship pointed in the right direction. AI is a guy on a jet ski doing backflips, telling us we’re falling behind because our cargo ship hasn’t adopted jet skis.

AI is a guy on his high horse, telling everyone how much faster they could go if they also had a horse. Except the horse takes a dump in the middle of the office and the whole office spends half their day shoveling crap because this one guy thinks he’s going faster.


This is exactly what I've seen. A perfect description. I am the tech lead for one part of a project. I review all PRs and don't let slop through, and there is a lot trying to get through. The other part of the project is getting worse by the day. Sometimes I peek into their PRs and feel a great sadness. There are daily issues and crashes. My repo has not had a bug in over a year.


As before, the big gap I still see is between engineers who set something up the right way and engineers who push code up without considering the bigger picture.

One nice change however is that you can guide the latter towards a total refactor during code review and it takes them a ~day instead of a ~week.


If anything I’ve learnt more because I’m having to go and find bugs in areas that I’m not super clued up on… yet ;)


Agree with OP that LLMs are a great tool for this use case. It's made possible because OP diligently created useful input data. Unfortunately OP's conclusion goes against the AI hype machine. If "consuming" is the "superpower" of AI, then the current level of investment/attention would not be justified.


I had the same initially confusing experience as the author, confusing "polish" for the language "Polish". The product manager should have used AI to get feedback on this feature...

I put the following into ChatGPT: "I am a developer for an email client. I want to incorporate some AI features to help users write more polished emails. I'm thinking I will have an option that appears under the email body where a user can click 'Polish'. Can you see any issues with this UX?"

ChatGPT: There are a few potential issues or points of confusion with a button labeled “Polish”:

Ambiguity: Some users might confuse “Polish” (as in “to make shiny or more refined”) with “Polish” (the language). If your product is used internationally or by non-native English speakers, that label can be unintentionally confusing. Even among English speakers, “Polish” may not be immediately clear as a writing enhancement feature; people might wonder, “What exactly does it do?”

Lack of Context: If users don’t already know what the feature does, they won’t know what to expect when they click “Polish.” You may need a tooltip, hover text, or short description like “Improve your text with AI suggestions” so users understand it’s about refinement, grammar, tone, etc.

User Control: How much will it change their text? Will it rewrite in a new style or just fix grammar? If it makes heavier changes, users might worry about losing their personal style or voice. Be clear about the scope of the changes.

Trust & Consent: Users may have privacy concerns if their email content is being sent to an AI for processing. Even if that’s already in your privacy policy, the feature’s label (just “Polish”) doesn’t convey what happens behind the scenes. You might consider adding a “What will happen to my text?” link or a discreet explanation about how data is handled.

Alternative Wording: Using a label like “Improve,” “Refine,” “Enhance,” or “Polish Text” might provide clearer expectations. You might also consider dynamic or contextual labeling like “Improve Grammar & Clarity.”

How to address these concerns:

Add clarity in the UI: A short descriptive text or tooltip under the “Polish” button can briefly explain what the feature does: “AI-powered text improvements.”

Offer user choice: Present a preview of the changes before they commit. Let them accept or reject individual suggestions so they feel in control.

Signal the feature’s AI: Some users will want to know that an AI is involved, especially if they’re concerned about style changes or privacy. Labeling it as “Polish with AI” or something similar can help set expectations.

Overall, the idea of a quick “one-click to refine” feature is appealing, but make sure the button’s purpose and functionality are crystal clear, both to avoid confusion and to build user trust.


I'm not sure it's the right thing to ban TikTok, but I'm not surprised people aren't protesting in the streets. My feeling is that most people are addicted to these apps, not using them for the utility or pleasure they bring. These users have a deep down sense that they'd be better off without these apps. Instagram won't be banned, but if it were I think there might be a similar non-reaction.


That doesn't characterize me or any of the folks I know who use the app.

I know what it feels like to be addicted- I used to drink quite heavily and often.

Why are you theorizing about people you don't know engaging in an activity which is foreign to you?

Can you state some of your media consumption preferences so we can deconstruct them and find their problematic aspects via a struggle session? If you're not up for that, why do you feel like it's okay to try and take that line with other adults?


Exactly. I know people who have TikTok as the lifeline of their business and they're rightfully pissed off that it's taken from them on a whim.


>My feeling is that most people are addicted to these apps, not using them for the utility or pleasure they bring.

What if most people aren't addicted to them, and do simply find them useful and entertaining, and the hyperbole and discourse about how dangerous and addictive social media and "algorithms" are is a means to manufacture consent for the government regulating and controlling free speech?

What if it's a bigger problem that all the government has to do is gesture vaguely in the direction of China and many people will just roll over?


> My feeling is that most people are addicted to these apps, not using them for the utility or pleasure they bring.

...I find it interesting to think about whether using something due to addiction means you wouldn't protest if it's forcibly taken away.

In other words, I wonder if people would protest if cigarettes were banned.


Not many people would protest a cigarette ban because the effects of smoking are very visible.

Not many people will protest a TikTok ban because it's largely happening to someone else and not very many people care about issues of free speech, as long as it's not happening to them.


If anything, I'd expect addicts to fight the hardest when someone takes their "drug" away.


I think replacing cigarettes with vapes is more topical, at least in America.


Well, the USA did ban alcohol for a few years, and that was a rowdy time.


The smokers' rights movement is a thing, and they do protest, yes.


Check out the commoncog starter manager's guide: https://commoncog.com/g/starter-manager/


thanks will check this out!


I went to UPenn for engineering. I could do these things but not at a professional level. I would definitely not call them "standard". Almost everything I know how to do in Excel I learned at work. Knowing how to send an email is different from knowing how to manage your entire working life in email / to-dos. Presenting is similar. We had to present information often and I think I was strong at presenting coming out of college. However, many of the people I work with are very weak in one or more of these areas, so it seems like people aren't well prepared coming out of school (even those from top universities).


I haven't really seen these issues. With the basic understanding from school, it's not hard and doesn't take long to get to the professional level. Most of the intricacies are org specific anyways, so a class won't help with that.


My team uses zendesk and they don't hate it, but this is enough to make me consider the other options on their "top 10" list.


looks like both links have been taken down


As long as I am an expert in any and all disciplines, I will find success. Got it.

