I have a solution, but I don't think companies care about this level of meta-analysis on how people work together. They think they do, but in reality they just care about optics, and the status quo culture carries enormous inertia toward continuing in the same direction, largely dictated by industry "standards".
In essence, estimates are useless. There should only be deadlines, plus the engineers' confidence in meeting them. To the extent estimates exist at all, they should be an external observation made by PMs and POs, based not only on past performance but also on knowledge of how each team member works. This of course only works if engineers are ONLY focusing on technical tasks, not creating tickets or doing planning. The fundamental point of failure is making people estimate or analyze their own work, which introduces a bias. Eliminating that bias also gives engineers the opportunity to optimize their workflows and maximize their output.
TLDR: engineers should focus strictly on technical work, because that lets them optimize within their domain, while other roles (PM, PO, or whoever) create the tasks and do the estimating. Of course this doesn't happen, because the industry's entrenched biases are hard to break.
This is one of the main reasons I'm trying to pivot away from a career inside a corporate environment. There is too much politics. I wish it were just do the work, go home, and get rewarded for the work that was completed, but instead there is a huge self-promotion (as in marketing) component. If that's what it takes, I might as well do something that I own and control. If I'm going to have to worry about marketing my own work, then I might as well try it without a boss. I always thought the point of being an employee with a limited paycheck was that you don't worry about these things. That's the fair tradeoff.
There’s too much emphasis on career growth into leadership. I know so many programmers who simply want to solve the trickiest of technical problems, do good work they can feel proud of, and go home to their families. They want stability more than anything.
There are rare software companies where this is exactly what programmers do. The pay is lower than at FAANG & SV/LA/NYC startups, but work-life balance is great, stability is great, and most of all they get to just focus on doing great work. It's not about making quarterly goals, it's about stewarding (or perhaps gardening) a software project for many years. Engineers grow a lot from all the deep, focused feature work and problem solving.
I worked at such a place for 15 years. The downsides for me were lower pay, no equity, and not getting broad industry experience. I ended up leaving, and I now make a lot more money, but I do miss it.
Google lets people stay at L4 forever and Meta does at L5 with no expectation of further growth.
Yes, the expectations are probably still higher, but these companies don't expect everyone to grow past "mostly self-sufficient engineer" as the parent comment suggests, and for people who do want to keep growing there's a full non-management path to director-equivalent IC levels. My impression is that small companies are more likely to treat management as a promotion rather than as a lateral move to a different track (whenever I hear "promoted to manager" I kinda shudder).
Depends on the team — managing can be quite a bit more scope than being a senior IC, depending on the expectations for that role. You have broader ownership of technical outcomes over time, even aside from the extra responsibility for growing a team. Managers have all the responsibility of a senior engineer plus more. In that way, manager feels like a clear promotion to me. Manager vs staff eng, maybe not.
Management not being a promotion doesn't mean that managers aren't usually at higher levels than their reports (I've been at both equal and higher levels than my managers at times). It means that switching from an IC role to a management role is never itself a promotion (i.e., always L6 -> M1 in Google/Meta levels) and it never comes with any difference in compensation.
I haven't been a manager, but my understanding is that the higher IC roles assume you're competent enough to do some management-like things if needed ("responsibility without control"), and I also assume that being a manager helps with compensation because they actually teach you how the review process works and let you into the calibration meetings.
the saddest thing is that it used to be possible to do this at at least some of the megacorps too. "senior engineer" (one level below staff) was widely accepted as "I have risen as high as I want to in my career, and just want to work on interesting problems now"; you would basically never get a raise other than cost-of-living, but you could do your work, go home, and live your life. that's still doable to an extent, but the recurring layoffs have added a measure of precarity to the whole situation, so now you have to care about all the self-promotion and "being seen to be doing something" aspects of the job a lot more than you used to.
My raises never matched inflation but then my compensation is like 700k a year. I don't know whether my raise needs to match the cost of living increase.
When I last worked at a FAANG I was very clear on exactly what they had to pay me to put up with their bullshit (I happen to be independently wealthy). This kind of “nominal raise” below inflation actually meant my salary went down, so I quit.
How did you know you're independently wealthy enough to do this? I'm at a stage where I'm sort of there and getting more and more annoyed by large organizational BS, but I keep thinking "one more year is that much more of a buffer."
Funny enough, it happened because of the 2022 layoffs. I figured I'd be fine (and was), but it made me go through the math and realize I was close to escape velocity. On one hand, it made getting excited about uninteresting work that much harder; on the other, because I wasn't quite there with enough buffer, the bad job market still gave me anxiety.
I’ve never known anyone to escape that situation. The “one more year” attitude is pervasive.
In my case it was easy because I didn’t join the company until after I was already independently wealthy, from an IPO (quitting that company was an easy decision too, due to all the magical changes that happen after IPO).
Shipping the frontend for features in a core product area on a large team, just like a lot of other devs here :)
To go into specifics of actual problems solved and do so intelligibly, I'd have to provide specific context, which I'm not comfortable doing here.
It's a lot easier to describe "interesting problems solved" using less identifiable (and more generally interesting) details if one is in platform/infra and/or operating at a Staff+ level -- both of which I have been in the past (and loved it), but am not at the moment.
I'm pretty sure no one is going to be hunting down NDA infractions on HN unless the poster is silly enough to give specifics about the workplace and time at which they solved the problem. If it takes some kind of investigative work to piece together the most basic details, I think that's within the terms of most NDAs anyway.
One of the last times I commented in a thread like this, someone looked at my profile (which has my real name), found me on LinkedIn, and then posted my employer's name in a reply to me, calling out an alleged conflict of interest (you can find it in my comment history and make a decision on that for yourself, if you're curious).
It's not worth the internet points for any of us to post details beyond what we do.
Google's terminal level is one past new grad, and it has a full parallel non-management IC track; I don't think they're pushing people that hard into leadership roles.
That's precisely why programmers become programmers. It baffles me that tech careers push most people onto a leadership track when they studied CS for many years for a reason. Why would I want to throw those technical skills away?
And then what happens when you are looking for your next job and you get a behavioral interview question and all you can say is “I pulled Jira tickets off the board for a decade”?
No, when I ask those questions I want to know how you think about your role and whether you take any ownership of anything or whether you just bumble through as the winds take you.
I recently spent 30 months trying to qualify for a promotion, earnestly striving to demonstrate that I could operate at the staff engineer level. I accepted literally every single opportunity, offered by my manager and director of engineering, to take on extra work and show that I'm staff caliber. They were both eventually persuaded that I was ready, so they authored an 18-page "promotion packet" touting my many accomplishments—the marketing component you describe. This document was then presented to an anonymous promotion committee who, after two weeks of consideration, rejected my promotion.
I am now channeling every ounce of my energy toward becoming my own boss. They have unwittingly started an unquenchable fire in me, and I eagerly await the day I can tender my resignation and tell them just how much they didn't deserve me.
To be fair, this issue isn’t endemic only to big companies. I’ve seen similar even in academia, some people just know how to “play the game” and play it very well.
It really depends on the culture of where you are, which can even vary team by team in the same org.
Some people seem willfully ignorant of the game. When confronted with the reality of it, they turn away, complain that it exists, and act like a bullied middle schooler.
You don’t have to enthusiastically endorse the game. You can learn it, just like you learn Go or Rust or whatever. You can refuse to actively play it, but also be aware of it enough to avoid getting hurt by it.
E.g. figure out the minimal effort for convincing game players that your work is important.
I so wish every new starter in professional working life were given a 101 on literally this. I've seen so many talented people over the years think they're above taking part in corporate optics and then get bogged down by resentment, watching lesser people get pats on the back.
The system is crap. It should all be about meritocracy and all that, but it's not. It is what it is. People need to stop being naive and shooting themselves in the foot.
EDIT
The other infuriating thing is that the people who say they wish the game weren't the game, and who genuinely could change things because they have the right sensibilities and are talented enough to rise to positions where their decisions matter, choose not to engage in the game to make that happen. Wake up: you're letting the "wrong people" (in your view) win.
The problem here, at least I think, is that you may be unaware of what running one's own business demands. There are far more politics, more cutthroat behavior, legal threats, and a mass of regulations you must know about, with potential penalties later if you don't.
I can see that, but then what's really broken is the education system. If what you say is true, it means there is no such thing as being a specialist, at least not anymore, yet almost all universities train people to be specialists. Either industry should stop looking at academic degrees completely, or schools should start teaching business first and technical knowledge second for most degrees (with the exception of academia and research).
My brother is studying economics right now, and he said everyone could use some basic economics knowledge, because getting an intuition for how markets work really helps you as you're looking for jobs and navigating around companies. Maybe business knowledge is better, but I'm personally biased towards the empiricism of economics :) You're onto something though about the need for awareness of how companies think and work.
I'm not convinced there is a missing part. It's hard to summarize, but to use a metaphor, it's like wanting a self-driving car that sometimes crashes to go faster and faster. The missing piece is just the AI figuring out what it needs to figure out. In theory we're heading toward every single tool becoming obsolete as AI can understand and do more... but that's in theory, of course. If scaling LLM transformers does not deliver, then yes, perhaps better tooling will help, but I still think the approach has fundamental flaws, and in the end it isn't saving that much time once all the mistakes and corrections are factored in.
Creating new (really good looking) fonts must be brutal.
Many symbols are collections of other symbols raised and placed more or less on a grid. You could cover quite a lot with not that many created glyphs. Now, actually making it good quality and properly adjusted... that could take years...
Love Strudel, trying to learn it but inevitably you also need some musical foundation. It's a fascinating blend of specialties. Also I found AI is complete garbage at generating Strudel.
Here is my weak attempt at Beethoven:
This is hard to explain succinctly, but how each individual works should be more or less a black box to the company. Moving away from this increasingly comes at the expense of employee privacy and hands employers more leverage, further tilting an already broken and imbalanced relationship. This kind of practice should be illegal.
> Our systematic review of 445 benchmarks reveals prevalent gaps that undermine the construct validity needed to accurately measure targeted phenomena
Intelligence has an element of creativity, and as such the true measurement would be on metrics related to novelty, meaning tasks that have very little resemblance to any other existing task. Otherwise it's hard to parse out whether it's solving problems based on pattern recognition instead of actual reasoning and understanding. In other words, "memorizing" 1000 of the same type of problem, and solving #1001 of that type is not as impressive as solving a novel problem that has never been seen before.
Of course this presents challenges in creating the tests, because the test items must not resemble anything in the many petabytes of training data these systems are trained on. That's where some of the illusion of intelligence arises (illusion not because it's artificial, since there's no reason to think the brain's algorithms cannot be recreated in software).
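For what it's worth, "avoid the training data" is roughly what benchmark decontamination tries to do. Here's a minimal sketch of one common heuristic, n-gram overlap between a test item and training documents (the function names and the choice of n=8 are my own illustration, not taken from any particular benchmark):

```python
def ngrams(text, n=8):
    """Set of word n-grams in a text (case-folded, whitespace-tokenized)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contamination_score(test_item, corpus_docs, n=8):
    """Fraction of the test item's n-grams that appear in any corpus document.
    A score near 1.0 suggests the 'novel' item was likely seen in training."""
    item_grams = ngrams(test_item, n)
    if not item_grams:
        return 0.0
    corpus_grams = set().union(*(ngrams(doc, n) for doc in corpus_docs))
    return len(item_grams & corpus_grams) / len(item_grams)
```

Real pipelines use hashing or suffix arrays to scale this to petabytes, but the principle is the same: high overlap means the task isn't actually novel to the model.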
In my opinion a major weakness in how people reason about this issue is that they describe solving problems as EITHER recall of an existing solution OR creative problem solving. Sure it is possible for a specific solution to be recalled, but it's not possible for a problem to be absolutely unrelated to anything the system has ever seen before and still be solvable. There are many shades of gray in the similarity a problem may have to previously seen problems. In fact, I expect that there are as many shades of gray as there are problems.
The difference is that humans don't memorize petabytes of problems, so from a relative perspective people are constantly solving novel problems they never saw before. I'm thinking this is a requirement for dynamic, few-shot learning. We can clearly see LLMs fail when you throw even a small wrench in the prompt.
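On the "small wrench" point: one way people probe this is by perturbing benchmark items in ways that leave the underlying reasoning intact, for example swapping the numbers in a word problem. A toy sketch (the function and its details are hypothetical, just to illustrate the idea):

```python
import random
import re

def perturb_numbers(problem, seed=0):
    """Swap every integer in a word problem for a different random value.
    A surface-level change that should not alter the reasoning required."""
    rng = random.Random(seed)

    def swap(match):
        old = int(match.group())
        new = old
        while new == old:  # guarantee the value actually changes
            new = rng.randint(2, 99)
        return str(new)

    return re.sub(r"\d+", swap, problem)
```

If a model solves the original but fails on many perturbed variants, that's evidence it pattern-matched the surface form rather than reasoned about the problem.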
Humans encounter massive numbers of problems in their experience, and that experience informs their problem solving. The same is true of LLMs. LLMs do not actually have all their training data memorized.
I’m not sure what your basis is for saying “LLMs fail if there is a small wrench in the prompt.” They also succeed despite wrenches in the prompt with great regularity.
Let's clarify, this isn't about whether the models are capable. They are very capable and impressive. This is more about whether we can use the same type of metric we use for humans to compare and conclude if they are "intelligent".
It's not just semantics; the metrics are supposed to tell us the potential of the model. If they can solve extremely hard PhD problems, we should already be in the singularity, and they should be solving absolutely everything in whatever field they were trained in, because it's not just PhD level: it's a machine with a ton of memory and compute that never sleeps. However, once you use these models extensively, it becomes apparent they are mostly synthesizing data rather than understanding it in a way that would let them extrapolate to anything else, as humans do.
I think this point is a little hard to explain. I'll just emphasize, these are smart systems, and they can do a lot, but there is still a disconnect between, let's say, a PhD level model and a human with a PhD, in the "quality" of what we would call "intelligence" of both entities (human and machine).
Human metrics of intelligence have always felt like rubbish; we never did this well. I would describe intelligence as effective adaptation leading to survival, growth, and prospering. Memorization, comprehension, speed of response, etc. are magnifying factors that we value and view as components of intelligence, but LLMs are proving they are not the whole: without effective application, they are not intelligence. Perhaps learning is the difference? How do we measure that?
Someone describing string theory is the literary equivalent of fractal structures in snowflakes: lovely, complex, possibly unique, but not proof of a level of intelligence. For the string theorist maybe it is intelligent; perhaps it persuades someone to fund their grant, which enables them to eat, find shelter, etc. That might be a bit harsh on string theory. Saying it is proof of an amount of intelligence at least leads us to falsifiable statements.