While my friend was offered a bump with the promotion to EM, the total compensation was less than the offers he received for Senior/Staff Engineer at other startups.
"At other startups" is the important bit here. I assume 'startup' means less than 100 people, at which point it switches to 'scaleup', but whatever definition you use the question should really be 'When does an EM actually start being useful?'
Startups don't really need EMs because they don't really need managers at all. There is a strong expectation in a startup that the staff there are capable of managing themselves, plus there are usually fewer business functions that developers need to work with. Where an EM is useful is in a larger business that has many competing demands on its engineering teams: features, roadmaps, BAU, KTLO, tech debt, compliance, etc ... it's a long list. Someone needs to manage that, to support the teams in doing the right work at the right time without letting standards slide, and to help them navigate and negotiate the political maze of multiple stakeholders all wanting their thing to be priority #1.
I've been an EM for a few years and I would not recommend someone take an EM role at any company with fewer than 70-100 staff (assuming about 1/3 is engineering). Once you get to that scale I think the role starts getting interesting, and valued, but a smaller company should already be managing those processes fairly easily. If it isn't, that's a signal that there are problems with the way the business works, problems an EM probably can't solve because they're rooted in the business's leadership culture.
There's been a lot of "the world doesn't work the way I want it to" on HN recently. I suspect this is a function of an aging readership more than anything particularly groundbreaking about hot takes on the up and coming tech.
"Anything invented after you're thirty-five is against the natural order of things." Douglas Adams
You act as if the internet were a high-society book club, where all the previous articles were written by Ivy League grads.
I recall GeoCities, Angelfire, all the chans.
The internet has always been a cesspool with little islands of quality floating in the proverbial sewage of human output. In theory, AI slop will improve.
A racist, sexist, ignorant online community of humans 20 years ago, if it is still active, is almost certainly still a racist, sexist, and ignorant community today.
Being able to name especially egregious forums is the point. AI slop isn't worse than preceding slop, but it is more widespread, partly because it's more socially acceptable than racism, sexism, and ignorance, and partly because it's harder to identify.
Similarly, email spam that is easy to automatically categorize is not a problem.
Making slop less sloppy makes the problem worse, not better. You could claim that that's only true up to a threshold, but there's a pretty strong information-theoretic argument against that.
I am making no claims about slop - I think that saying "AI slop is going to ruin the internet" is itself something that requires further clarification.
I'm assuming you are advocating for AI to "go away" or be banned or something of that nature - that is most definitely not a valid argument.
AI doesn't do anything. People set the AI on a task. People have every right to ruin the internet however they see fit (within legal realms), and I don't even think you are actually upset about the more "unleashed" AI that posts comments and participates in chats with a specific agenda - you are annoyed with the websites that are mostly AI content...
The AI didn't make the website, select the topic, prompt it, and paste that onto the page -> a person did that. You're actually upset at people for not posting content up to your standards - a complaint people have been making the entire time the internet has been a public thing.
I honestly do not understand what part of this whole process, or of AI content in general, makes the AI itself seem so empowered.
Your argument is essentially akin to "people don't kill people, guns do", and all arguments framed this way operate under the assumption that the speaker is some arbiter of quality - that simply by saying "AI slop" they make it so.
AI slop is ruining the current internet, including forums, email, blogs, announcements, and much of the remaining content. I say "current internet" because we will adapt as we always have, but many things that were formerly useful or interesting will be buried in so much crap that it will stop being something that people use the internet for.
At the dawn of email, I could and did cold email professors, and they would respond based on whether my query was worth responding to. I put effort into my messages (and had a reason, I wasn't just trying to elicit responses), and my success rate was very high. It wasn't scale that killed that, it was spam and greed. (There's overlap, but by spam I mean unsolicited commercial email, and by greed I mean people blasting out large numbers of low-effort messages in an attempt to gain something.) Professors are still interested in meaningful correspondence, but email is no longer a usable communication medium unless they already know their correspondent.
AI applies the same dynamic to many more forms of content. Individually, it doesn't do much harm. In aggregate, the meaning and value are rapidly being destroyed.
It's kind of ironic -- in the early days of online communication, there was endless hand-wringing over all the cues and subtext that we've lost from face-to-face communication. Now we take that loss as a given, and have collectively decided to attenuate the signal even more.
I wouldn't advocate for AI to just go away in all domains. It's a cool and useful technology. But I personally would prefer it if representing AI output as your own writing were looked upon roughly the same way as having a secretary write all of your correspondence. Well, a little worse -- it's like having an arbitrarily chosen secretary from a worldwide pool write each item of correspondence. If I ruled the internet, that's where I would set social norms and expectations. People could still use it for translation, but it would be a major faux pas not to divulge your use of AI when the reader has reason to believe you wrote it yourself. Sure, there would have to be many judgement calls -- if you get an AI's advice on how to say something and then reprocess it into your own words, for me that'd depend on how real that reprocessing is. But that's nothing new, it's just another form of the plagiarism slippery slope.
Sadly, I do not rule the internet, and it's a lost cause.
Whether it's the person using AI or AI itself that is responsible? That's a non-sequitur. I don't care. Describe it how you like. I'm describing the effect, not assigning blame.
I have extensively used AI - it's not as capable as you think it is. I frequently run into hard limits of its ability. I understand what recursive means in the sense of an AI: I can see it folding pieces of this and that of what I've said, or what has been discussed, into itself to create the appearance of depth, growth, or progress - none of that is real. The AI does not change.
I use AI as feedback - but only after setting almost 50 variables/conditions for that feedback, because AI is an automatic sycophant 100% of the time - but it doesn't have to be that.
I occasionally use AI to transfer what I am saying to a person into words that don't offend them - as I have absolutely no patience for people's insecurities when I find myself in a position where I need to teach them something, which happens often.
Let me be very clear - you are no longer capable of identifying AI content; nobody is.
I extensively tested that by having a broad conversation with some of the smartest people on a platform (on earth in general, really), who all have very real credentials - I engaged with two sides of the AI coin regarding AI being self-aware or not, which is actually being debated by some of the smartest people.
Half of my comments I ran through AI - or generated completely from a prompt - and my most-liked comment was not mine, liked by people whose professional occupation is literally AI.
I'm sure this disturbs you - that an AI can create a Wikipedia page with more accuracy, better-quality writing, and in a more engaging way than 99% of people - that is our actual reality, though.
Now all those little chat bots running around the internet, low-level AI - they are creating slop in exactly the same places and ways that humans do; their very words are modeled on the words people have literally written.
So, an AI can create a 100% perfectly written article for a major publication - and then AI can also fill the comments on that "perfect" article with absolute garbage - very similar to how things have always functioned online.
You need to interact with AI more, so you actually understand it and are not afraid of it, or imagining it with more ability than it has, or giving it human agency - AI is literally not capable of having agency at all.
Right now, there are tens of millions of millennials who are functionally identical to Boomers with smartphones.
You can't prevent AI from changing every aspect of human life - nobody can. You can be the boomer who refuses to adopt a smartphone - but they all have smartphones now.
And this might be the first time in my life that I was strawmanned with an accusation of pulling a strawman - that's pretty fantastic actually, I'll give you that.
Otherwise I wrote a book on the other comment on this - you should check that out.
Honestly, I'm not good enough at distinguishing between AI and human-written content to know. I don't want to read 'bad writing' for sure, but that rules out a lot more than AI. I also believe that the human (hopefully) reading and accepting the output of AI before putting it on the internet is more responsible for 'AI slop' than the AI is, because the human-side author should be checking what they publish, so I don't really need to know. If I read someone's post and don't like it, I won't go back to their blog again. If I read it and I do like it, I will go back. Whether or not they're using AI is essentially irrelevant to me.
Fortunately for us all, HN does that curation for us. High-quality posts from well-written and interesting blogs like simonw's get posted here a lot. I can't tell if he uses AI to help write them, but given his deep work on AI topics I'd be surprised if he doesn't.
Plus, I strongly suspect that AI content is improving at a pace that means most people won't be able to tell in a few years, especially once tools to fine-tune a model on a corpus of your own text are simple to use.
I'm experimenting with building an agent swarm to take a very large existing app that's been built over the past two decades (internal to the company I work for) and reverse-engineer documentation from the code, so I can then use that documentation as the basis for my teams to refactor big chunks of old, no-longer-owned-by-anyone features and to build new features using AI better. The initial work to just build a large-scale understanding of exactly what we actually run in prod is a massively parallelizable task that should be a good fit for some documentation-writing agents. Early days, but so far my experiments seem to be working out.
Obviously no users will see a benefit directly, but I reckon it'll speed up delivery of code a lot.
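The fan-out step is simple enough to sketch. A minimal sketch, assuming Node and a placeholder summariseFile wrapper around whatever LLM API you're using - the name, signature, and file-extension filter here are my assumptions, not a real library:

    import { readdir, readFile, writeFile } from "node:fs/promises";
    import { join } from "node:path";

    // Placeholder for the actual LLM call -- name, signature, and behaviour
    // are assumptions for this sketch, not a real API.
    async function summariseFile(path: string, source: string): Promise<string> {
      return `TODO: send ${source.length} chars of ${path} to the model`;
    }

    // Fan out one documentation agent per source file. Each call is
    // independent, which is what makes the "what do we actually run in prod"
    // pass massively parallelizable.
    async function documentCodebase(root: string): Promise<void> {
      const files = (await readdir(root, { recursive: true }))
        .filter((f) => f.endsWith(".ts") || f.endsWith(".js"));
      const docs = await Promise.all(
        files.map(async (f) => {
          const source = await readFile(join(root, f), "utf8");
          return `## ${f}\n\n${await summariseFile(f, source)}`;
        }),
      );
      // Aggregate the per-file summaries into one reverse-engineered reference.
      await writeFile(join(root, "ARCHITECTURE.md"), docs.join("\n\n"));
    }

In practice you'd batch or rate-limit that Promise.all fan-out, but the shape of the problem is exactly this: independent per-file summaries, aggregated afterwards.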
Maybe it’s just me, but I feel that same kind of treachery when somebody tries to pass off a piece of AI-generated work as if it were their own voice.
There's a flaw in the Milli Vanilli argument. The band had no input into their songs. They 'performed' them by lip-syncing on stage, but all of the music and lyrics were someone else's. Milli Vanilli had no part in the creative process.
That's not technically true of AI content. There's some tiny little seed of a creative starting point in the form of a prompt needed for AI. When someone makes something with Claude or Nano Banana it's based on their idea, with their prompt, and their taste selecting whether the output is an acceptable artefact of what they wanted to make. I don't think you can just disregard that. They might not have wielded the IDE or camera or whatever, and you might believe that prompting and selecting which output you like has no value, but you can't claim there's no input or creativity required from the author. There is.
I also believe the Milli Vanilli argument to be flawed, but the other way around: music videos were all the rage back then and the two supposed singers were actually just performers for the cameras. Does this mean they had no part in the success of the music? I don't believe that.
That's not to say they were right in misleading the public and their fans, but it seems to me that Milli Vanilli was a fruitful combination of the public-facing performers and the musical process behind them. Everyone is fine with ghostwriters, so why is this so different? The entertainment industry is fake through and through, but nobody actually takes offense at this fact.
I've often wondered whether a similar project could find success if it were presented differently, as a cooperation between musicians and performers.
Actually, I am not fine with ghostwriters. I am fine with speechwriters, because public speeches are a shared product, and so is music. Performing someone else's music is normal and has probably been done ever since music existed. In that sense I would also not mind a performance of a song that was originally created using AI. If the song and the performance are good, then it is no different from performing a traditional song. If you don't like AI music, that's fine; I also don't like every traditional song, but that's not the fault of the performer (beyond choosing a song you don't like). The problem with Milli Vanilli is that they violated the expectation that we know who the singers are. Milli Vanilli were dancers, not singers, and if that had been properly communicated, it would have been fine.
I'd challenge that assertion. LLMs still produce very bad results with greenfield work, so that seed was almost certainly planted a thousand times before by people who had both creativity and skill. Having a glimmer of an idea that you've probably seen more or less intact somewhere else and getting an AI to take it from that point is much closer to Milli Vanilli than any actual creative work.
Good artists borrow; great artists steal. LLMs synthesizing previous works doesn't mean they're meritless, because humans do the same thing.
AI is like a camera. Photographs can be art, but they aren't always. If you prompt an LLM only with "write me a novel," you can't take credit for the result, any more than if you took a cell-phone snap of the Mona Lisa. But AI can be used intentionally to create art, same as a carefully planned portrait photoshoot is art.
Art isn't a binary. Art is a spectrum. A creation is art to the extent that it reflects an artist's unique vision, not because of the tools the artist used (or didn't use).
> hey chatgpt give me a snarky response to this comment that would wittily refute the argument, make it funny and interesting, concise and to the point
Ah yes, the “tiny little seed” defense — because if I hum three notes and Quincy Jones writes the symphony, clearly we co-composed it.
Sure, prompting involves taste and direction. So does ordering at a restaurant. But if I tell the chef “spicy, but make it fusion” and then Instagram the plate as my culinary creation, I’m not suddenly Gordon Ramsay.
Nobody’s saying there’s zero input. We’re saying input isn’t authorship. A seed isn’t a forest — and picking your favorite output isn’t the same as growing it.
That reference though! I never thought I'd ever read on HN a blog post mentioning "Girl You Know It's True" by Milli Vanilli. Wow, what a throwback in (exceptionally cheesy) time.
Yeah sorry, I can claim there's none, you can't stop me.
I could claim there's even less creativity than lip syncing, and I will.
And if there was any creativity, the use it is being put to is to do violence to artists. If you think you deserve someone else's work as your own, you better be prepared for the fact that you won't even really understand who you're ripping off, but someone else sure as shit will and they're going to be pissed as hell.
At least Milli Vanilli themselves a) knew what they were doing, b) paid the real singers, I presume, and c) presented real art created by real people.
But still everyone was mad at the lie of it, at being asked to venerate an imposter. And being asked to believe that in the future "impostor" will be the most venerable role. No. Just no.
This seems to really reason through only the happy path, ignoring bad actors, and there'll always be bad actors.
True, but the bad actors can defeat any security mechanism you put in place with a proxy, or a copy'n'paste, so the downside risk is pointless to worry about. The upside of allowing traffic is that your content, which you presumably want people to read, can be read by more people. For all but the most popular blogs that's probably a net benefit.
That site says my 24GB M4 Pro has 8GB of VRAM. Browsers can't really detect system parameters. The Device Memory API 'anonymizes' the value returned to stop browser fingerprinting shenanigans. Interesting site, but you'll need to configure it manually for it to be accurate.
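For anyone curious, checking what the browser will actually report is a one-liner. A minimal sketch; per the Device Memory spec the value is clamped to the set {0.25, 0.5, 1, 2, 4, 8}, so 8 is the ceiling regardless of actual RAM:

    // deviceMemory is bucketed by spec to limit fingerprinting; a 24GB machine
    // reports at most 8. It's Chromium-only -- Safari and Firefox don't expose
    // the API at all, hence the fallback for an undefined value.
    const reported = (navigator as any).deviceMemory;
    console.log(`Browser-reported device memory: ${reported ?? "unavailable"} GB`);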
Conceptually this is very similar to the question of whether or not you should squash your commits. To the point that it's really the same question.
If you think you should squash commits, then you're only really interested in the final code change. The history of how the dev got there can go in the bin.
If you don't think you should squash commits then you're interested in being able to look back at the journey that got the dev to the final code change.
Both approaches are valid for different reasons, but they're a source of long and furious debate on every team I've been on. Keeping a history of your AI sessions alongside the code could be useful for debugging (less code debugging, more thought-process debugging), but the 'prefer squash' developers usually prefer to look at the existing code rather than the history of changes to steer it back on course, so why would they start looking at AI sessions if they don't look at commits?
All that said, your AI's memory could easily be stored and managed somewhere separate from the repo history, and in a way that makes it more easily accessible to the LLM you choose, so probably not.
I've generally been in the squash camp, but it's more out of a sense of wanting a "clean" and bisectable repo history. In a world where git (and git forges) could show me atomic merge commits but also let me seamlessly fan those out to show the internal history and iteration, and maybe stuff like LLM sessions, I'd be into that.
And yes, it's my understanding that mercurial and fossil do actually do more of this than git does, but I haven't actually worked on any projects using those so I can't comment.
This only works if the software is still crafted by a human and merely using AI as a tool. In that case the use of AI is similar to using editor macros or test-driven development. I don't need to see that process playing out in real time.
It's less clear to me if the software isn't crafted by a human at all, though. In that case I would prefer to see the prompt.
I agree that fully agentic development will change things, but I don't know how. I'm still very much in the human-in-the-loop phase of AI where I want to understand and verify that it's not done anything silly. I care far more about the code that I'm deploying than the prompt that got me there and probably will for a long time. So will my prodsec team.
Appreciate this very sane take. The actual code is always more important than the intentions, and this is basically tautological.
When dealing with a particularly subtle or nuanced issue, knowing the intentions is still invaluable, but this is usually rare. How often AI code runs you into these issues is currently unclear, and constantly changing (and how often such issues are actually crucial depends heavily on the domain).
I think this is the right analogy, contrary to some other very poor ones in this thread. Yes, it is rare to really look at commit messages, but it can be invaluable in some cases.
With vibe-coding, you risk having no documentation at all for the reasoning (AI comments and tests can be degenerate / useless), but the prompts, at bare minimum, reveal something about the reasoning / motivation.
Whether this needs to be in git or not is a side issue, but there is benefit to having this available.
That wouldn't be very portable. A benefit of committing to your history is that it lives with the code no matter where the code or the AI service you use goes.
That's true. I was thinking of it as being more like how LFS works since presumably the LLM contexts could be large and you wouldn't necessarily want all of them on every clone.
It had absolutely no trouble understanding what it is, and deobfuscated it perfectly on its first attempt. It's not the cleverest obfuscation (https://codebeautify.org/javascript-obfuscator) but I'm still moderately impressed.
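For context, that style of obfuscator mostly hoists string literals into an indexed lookup table and renames identifiers to hex soup. An illustrative before/after (not the actual snippet), just to show the kind of transformation the model is reversing:

    // Illustrative only -- not the real obfuscated code from the article.
    var _0x4f2a = ["log", "Hello,\x20world"];
    (function (_0x1b3c: any) { _0x1b3c[_0x4f2a[0]](_0x4f2a[1]); })(console);

    // ...which deobfuscates back to the obvious:
    console.log("Hello, world");

Each layer is mechanical to undo, but the model has to track the string table and the call indirection across the whole file at once.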
I’ve used AI for some reverse engineering and I’ve noticed the same thing. It’s generally great at breaking obfuscation or understanding raw decompilation.
It’s terrible at confirming prior work, if I label something incorrectly it will use that as if it was gospel.
Having a very clean function with lots of comments and well named functions with a lot of detail that does something completely different will trip it up very easily.
Yeah even the "basic" free tier Gemini 3.1 thinking model can easily unscramble that. It's impressive, but after all it's the very precise kind of job an LLM is great at - iteratively apply small transformations on text
It's genuinely amazing how good they are at reverse engineering.
I have a silly side project that ended up involving the decompilation of a toaster oven's firmware, the firmware of the programmer for said toaster oven's MCU, and the host-side programming software. The models were able to rip through them without a problem - I didn't even have Ghidra set up; they just made their own tools in Python.
If this is what AI ends up as, it'll be fine. Those ads are easy to detect and block. The danger is ads that are far less intrusive: the advertiser that wins the bid becomes the only hosting company mentioned in the text, or the examples all point to one vendor, or worst of all, the examples cover more than one vendor but only the bidder's examples actually work properly. An ad-nerfed AI could be made to hallucinate non-working example code for anyone that isn't paying. Done subtly, that could shift marketshare from an incumbent to a new service very quickly.
We've seen this happen on Google's results pages. Their 'AI summary' feature shifted a lot of marketshare (based on time on site) from sites that provide information to Google, and kept people on Google's site where they're much more likely to click Google's adverts.
And InTune needs users to put Edge on their devices, which means a significant set of users will just give up being able to open links on their phone from work apps. That puts a drag on productivity which is definitely more costly than 20€ per month.
"At other startups" is the important bit here. I assume 'startup' means less than 100 people, at which point it switches to 'scaleup', but whatever definition you use the question should really be 'When does an EM actually start being useful?'
Startups don't really need EMs because they don't really need managers at all. There is a strong expectation in a startup that the staff there are capable of managing themselves, plus there are usually fewer business functions that developers need to work with. Where an EM is useful is in a larger business that has many competing demands of engineering teams: features, roadmaps, BAU, KTLO, tech debt, compliance, etc ... it's a long list. There's a necessity for someone to manage that, and support the teams to be doing the right work at the right time, without letting standards slide, and to support the team to navigate and negotiate the political maze of multiple stakeholders wanting their thing to be priority #1.
I've been an EM for a few years and I would not recommend someone takes an EM role in any company that has less than 70-100 staff (assuming about 1/3 is engineering). Once you get to that scale I think the role starts getting interesting, and valued, but if the company is smaller it should be managing those processes fairly easily already. If it isn't then that's a signal there's problems with the way the business works that an EM probably can't solve because they're rooted in the business's leadership culture.
reply