A good measurement culture where numbers don’t replace common sense (promaton.com)
490 points by atorok on Aug 22, 2023 | hide | past | favorite | 456 comments


In the last engineering team I managed (by far the most successful one to date), we discarded most velocity measures in favour of a simple solution: every Friday, each team sends a brief "here's what we delivered this week" email to the whole company. It contains some screenshots and instructions like "agents can now update the foobar rate from their mobile app without having to call in". After a month or two of these going out like clockwork, it gave both management and business stakeholders a level of comfort that no amount of KPI dashboards ever could. KPIs compress away details that stakeholders need to form an understanding; a full blast from the task tracker is overkill. This solution worked for us, but it did require a solid feature flag practice so that half-built features (story slices) could be released weekly for testing without affecting production. That said, we did maintain a quarter-level sum-of-t-shirt-sizes KPI.
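The feature-flag half of this is simpler than it sounds. A minimal sketch of the pattern (all names here are illustrative, not from any real codebase):

```python
# Minimal feature-flag gate: half-built story slices ship dark and are
# enabled per environment, so weekly releases stay safe.
# Flag names and the config shape are illustrative.

FLAG_DEFAULTS = {
    "mobile_foobar_rate_update": False,  # story slice still in progress
}

def is_enabled(flag, overrides=None):
    """Environment-specific overrides (e.g. loaded from config) beat defaults."""
    overrides = overrides or {}
    return overrides.get(flag, FLAG_DEFAULTS.get(flag, False))

# Production: flag off, the new code path stays dark.
assert not is_enabled("mobile_foobar_rate_update")

# Staging: flag flipped on for this week's demo email.
staging = {"mobile_foobar_rate_update": True}
assert is_enabled("mobile_foobar_rate_update", staging)
```

Real systems usually back this with a config service rather than a dict, but the contract is the same: the release cadence is decoupled from the feature cadence.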


I’ve watched this backfire, too.

It works great when everyone is delivering day-long or week-long incremental features that lend themselves to nice screenshots.

But then you slowly start accumulating a backlog of difficult tasks that everyone avoids because they won’t sound good for the week. Technical debt accumulates as people cut corners in the interest of getting their weekly presentations out.

You can theoretically avoid it with enough trust, but the trigger for our collapse was a round of layoffs where the people with the most visible features were spared while people working on important backend work and bug fixes got cut. After that, it was game over.


Without an outstanding company culture, most KPIs are almost useless.

While working at a big corporation, we had a velocity initiative supposedly aimed at leading the company toward continuous integration.

"How long a PR stays open" was one of the KPIs in a dashboard.

I said: "Be careful with that!"

People started to close PRs and reopen new PRs with the same code.

Middle managers, and sometimes the division's point of contact for the velocity initiative, were the ones asking people to do that.

The script measuring this KPI was improved to look at the branch name and the code diff. The result? People changed the branch name and the EOL encoding in the new PR.
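One way such a script can resist the EOL trick is to compare PRs by a fingerprint of the normalized diff rather than the raw text. A sketch (the normalization rules here are my assumption, not what that script actually did):

```python
import hashlib

def diff_fingerprint(diff_text):
    """Hash a diff after normalizing line endings and trailing
    whitespace, so trivially re-encoded PRs compare equal."""
    lines = diff_text.replace("\r\n", "\n").split("\n")
    normalized = "\n".join(line.rstrip() for line in lines)
    return hashlib.sha256(normalized.encode()).hexdigest()

# The same change re-uploaded with CRLF endings no longer
# looks like a "new" PR to the measurement script.
unix_diff = "+added line\n-removed line\n"
windows_diff = "+added line\r\n-removed line\r\n"
assert diff_fingerprint(unix_diff) == diff_fingerprint(windows_diff)
```

Of course, as the story shows, each such fix just moves the game one level up; it doesn't fix the incentive.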

Learnings? B and C players with questionable ethics screw companies quite rapidly.

In this climate KPIs and aligning them with company values is futile.


When I worked at a FAANG ~15 years ago, a new VP came in and heard, correctly, that our group didn't have enough automated tests. He created a requirement that every developer commit two new tests to the code base every workday. He had automation put in to monitor our compliance.

Within a couple of weeks, scripts were circulating to auto-generate and auto-commit tests. If the JVM we were using had a bug in adding random numbers together, we'd have known about it very quickly.
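The generated tests presumably looked something like this (my reconstruction of the flavor, not the actual scripts):

```python
import random
import unittest

def make_junk_test(i):
    """Auto-generate a 'test' that can only fail if the runtime
    can't add two numbers together -- pure metric-gaming."""
    a, b = random.randint(0, 100), random.randint(0, 100)
    def test(self):
        self.assertEqual(a + b, b + a)
    test.__name__ = f"test_generated_{i}"
    return test

class GeneratedTests(unittest.TestCase):
    pass

# Two per workday, per the mandate.
for i in range(2):
    t = make_junk_test(i)
    setattr(GeneratedTests, t.__name__, t)
```

The compliance dashboard counts two new tests; the codebase is no better tested than before.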

That's about the time I decided I should move on. I'm glad I did. The company I joined treated developers like adults and developers acted like adults. And we had great automated test coverage.

Obligatory mention: https://en.wikipedia.org/wiki/Goodhart%27s_law


Accountability is a big part of leadership, though. The anecdote basically just says the VP used the wrong tool for accountability, not that "adults" shouldn't be held accountable.


I think what was missing here was developer buy-in on the actual changes implemented, and making sure that they were reasonable and sustainable. Sounds to me like a blanket mandate was handed down without buy-in and wasn't achievable, and the developers felt they had to work around it to get their real jobs done.


I agree. At the very least, it seems like he didn't communicate the "why". It certainly wasn't "to make as many tests as possible". I suspect the developers knew that, which is why it's also a little unprofessional on their part to treat it like that was the goal.


I think the bigger problem here was inappropriate expectations: "two tests a day". That's not a reasonable way to increase test coverage. And so the developers quite reasonably just tried to minimise the time spent on it.


>Number of unit tests isn't the best proxy for that goal…

However, I don’t think mindlessly creating tests is acting in good faith. It’s bordering on malicious compliance. I doubt they were thinking they could just knock out that metric so they could otherwise create better test coverage. (The OP conceded their coverage wasn’t good.) Better employees would work to create a better understanding/goal. All of that points to some cultural problems.


Malicious compliance is exactly what you get when people think your target is stupid and you put some automated, non-negotiable measurement in place.

If I were a developer there, I'd totally have started adding those generated tests. I have a job to do (ship products) and you put in some stupid requirement which actually interferes with it (since my time is limited, I can either work on tests or on my daily tasks like developing new features or fixing bugs), but we both know that if my primary job suffers, I'll pay for it. So the best solution, from my point of view, is the one that takes that requirement away and lets me go on actually working.

When we had a similar problem at a previous company, we just created an epic, assigned a couple of people to it, and had them churn out tickets for specific improvements, like "Class X has a coverage of Y on this method, add tests for the missing execution branches", which were clear, non-generic and fully integrated into our flow. If anybody complained about our velocity or whatever, we could show them the full trail which led us to choose to work on that ticket and how much we spent on it. The coverage issue got solved in less than a month.
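Tickets like that can be generated straight from a coverage report. A sketch (the report shape here is a stand-in for whatever your coverage tool exports, not any specific format):

```python
def coverage_tickets(report, threshold=0.8):
    """Turn per-method coverage numbers into concrete, non-generic
    ticket titles. `report` maps 'Class.method' -> coverage ratio;
    the shape is a stand-in for a real coverage-tool export."""
    tickets = []
    # Worst-covered methods first, so the epic starts where it hurts.
    for name, cov in sorted(report.items(), key=lambda kv: kv[1]):
        if cov < threshold:
            tickets.append(
                f"{name} has {cov:.0%} coverage: "
                f"add tests for the missing execution branches"
            )
    return tickets

report = {"Billing.charge": 0.45, "Billing.refund": 0.92, "Auth.login": 0.60}
for ticket in coverage_tickets(report):
    print(ticket)
# → Billing.charge has 45% coverage: add tests for the missing execution branches
# → Auth.login has 60% coverage: add tests for the missing execution branches
```

The point is that each ticket names a specific gap, so the work is reviewable like any other ticket instead of being a vague "improve coverage" mandate.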


>If I were a developer there, I'd totally have started adding those generated tests.

If I were a manager there, I probably wouldn’t hire you. You want people who are actively solving problems that matter, not just automatons beating their chest about how stupid everyone else is while they add to existing problems.


If you were a manager there and would approve that metric, no worries, I'd manage to fly it past you.

Any smart manager, having to deal with this kind of policy, would either push back or approve any way to game it and then get the work done. But I doubt that a company which allows this kind of BS can retain smart managers for more than a couple of months.


If you read any of my posts on this thread, it’s pretty obvious I don’t agree with the managers approach. But I also don’t agree that the devs are doing the right thing either.

It’s telling that you show such dichotomous and defensive thinking.


So you disapprove of the decision, but would have enforced it in the most asinine way at the expense of actual productivity.

I just hope to never work with you and, especially, never use anything you helped build.


What made you draw that conclusion?

I made the point that the metric used (raw number of tests) is a bad way to enforce a good goal (higher code quality). I also made the point that metrics aren’t inherently bad, but they have to be chosen judiciously.

You seemed to take those two points, extrapolate an entirely different story as a personal affront, and apply motives in a way that, frankly, reads as a bit unhinged. So I also hope we don't cross paths, because I come from safety-critical code development, where bad practices and toxic teammates can kill people.


If I was a dev there, I certainly wouldn't be approving those PRs either.

But if the devs at the org all (or even substantially all) tend towards malicious compliance, that's a sign that something has gone wrong, relations-wise.


Agree. That’s what I mean when I say it’s more indicative of cultural issues.


Would adult developers not have made sure they had enough automated tests?


It probably depends whether they had been given specific instructions that directly conflicted with that.


And it also depends on whether they're given a timeslot for it and/or are incentivized to do it. Automated tests take time to maintain; if the flow or the specs change, the tests need to follow.

I scrap all incentivized metrics when working on something urgent (important and soon), which is often the case. If the metrics are somehow incentivized, we'll start gaming them.

Now, if a dev takes part in production support, automated tests are meant to reduce the support workload, and there are time slots allocated for that, I bet everyone will start doing it.


If a civil engineer is given specific instructions to make an unsafe bridge, that engineer does not obey those instructions. I think adult developers should behave similarly.


The analogy starts to break down. Most software is less mission-critical than the structural integrity of a bridge, meaning the consequences of failure are probably annoyance rather than risk of death. Not always - I would imagine the software for nuclear reactors or aircraft controls is in a different category - but most software is not bridges.


Okay, but if developers decide that then at what point do you say it's okay to not treat them like adults?


I’ve often wondered what is the solution to Goodhart’s law. Obviously a business can’t abandon metrics. Perhaps this is where qualitative management skills come into play - humans in the loop making good decisions instead of blindly marching to the output of an automated reporting system.


I'd argue that the point of Goodhart's law isn't that metrics are bad, but that not all metrics are created equal. In the case above, it seems like the real goal was improved code quality. Number of unit tests isn't the best proxy for that goal, so it wasn't the best choice of metric. You don't want developers creating tests for the sake of creating tests. (There's some irony here in that it's not the type of behavior I'd ascribe to professionals.) The "solution" is a metric that's a better measure of what you actually want.


Even professionals have limits. I've worked in companies with the kind of management which kept adding these bad proxy metrics and pushing initiatives which had totally predictable bad effects on product quality. Most devs used to fight the management on this, but grew progressively tired of the continuous fight. At some point the experienced devs either left or just gave up and started giving the management what they asked for. Us juniors followed suit. The management was happy, the actual workload diminished because we let go of "low priority" tasks, and we even got a juicy bonus at the end of the year because of how well we were doing.

The company tanked six months after that, now it doesn't exist anymore.

There's only so much you can do when the management is hellbent on doing stupid things.


You might be misconstruing the point. I certainly wasn’t insinuating more and more metrics. If anything, it’s the opposite: a core understanding of what’s really important helps you focus on the few metrics that matter.

In that context, I’m not really sure what point you’re making, unless it’s just to share a personal anecdote. Are you implying that management shouldn’t have any quantitative measures and should only be qualitative?


You need good quantitative measures, not just random numbers.

If you sell, say, water bottles, you probably want to know how many of them you can sell at any given moment, in order not to overbook and have to reimburse people. In this case, keeping track of how many water bottles you have in stock probably helps; keeping track of how many labels with funny jokes you can stick on a shipping box in an hour doesn't. But if you start tracking the latter and handing down bonuses and layoffs based on it, people will max that metric out - at the expense of your actual stock capacity.

Quantitative measures are dangerous, especially in the hands of people who believe they are better than qualitative ones because they're "objective" or whatever. Because not only are they not, they are also better than qualitative ones at hiding their biases and soothing your own.

> Are you implying that management shouldn’t have any quantitative measures and should only be qualitative?

Many managers would do a lot better this way. They'd still make stuff up, but would at least be forced to admit it.


You realize that you just reiterated the same point I made in the OP, right?


Goodhart's law is always in effect. It can't be solved because it's not a problem, it's a fact of nature with annoying implications. It's the echo of the observation that efficiency is fitness as its consequences ripple from the lowest levels of reality through systems made of people.

You can make an engine as effective as the laws of physics let you, but you can't solve the limits of thermodynamics. You can only do your best within them. Same with this, for maybe the same reason.


In my experience, there is no good long-term solution except changing the metrics themselves every so often. When new metrics come in, as long as they are not totally boneheaded, they improve things. Then some of the more sociopathic folks learn to game them, others copy by example, and soon enough the metric is useless at best or detrimental at worst, and it is time to move to a new metric.

It helps to have someone with a hacker mindset think about the metric being designed, so the obvious ways in which it could be gamed are taken care of, and to make sure their own metrics/incentives are aligned with the company goal.


If you didn't have enough tests you weren't acting like adults. Why should he treat you like adults?


Maybe you are jumping to a conclusion too quickly.

How do you know what was really going on?

"He created a requirement that every developer commit two new tests to the code base every workday" seems a stupid requirement if you do not control the quality of the tests.

The same big corporation I wrote about above had a goal of 80% code coverage, reported on dashboards.

I saw people writing tests just to run lines of code, without effectively testing anything.

Other people were "smarter" and completely excluded folders and modules with low coverage from coverage testing.

Code coverage percentage numbers on a dashboard are a risky business. They can give you a false sense of confidence, because you can have 100% code coverage and be plagued by a multitude of bugs if you do not test what the code is supposed to do.

Code coverage helps you see where you have untested code, and if it is very low (e.g. less than 50%) it tells you that you need more tests. A high code coverage percentage is desirable, but it should not be a target.

The real problem is again the culture.

A culture where it is OK to have critical parts of the code untested. A large part of the solution here is helping people understand the consequences of low code coverage - for example, collecting experiences and, during retrospectives, pointing out where tests saved the day, or where a test would have saved the day, so people can see how tests may spare them a lot of frustration.

But again, when you give people a target and it is the only thing they care about, people find a way to hit it.


He said it himself.

> When I worked at a FAANG ~15 years ago, a new VP came in and heard, correctly, that our group didn't have enough automated tests.


There are a thousand reasons why reasonable people might have found themselves in that position. Maybe they inherited a code base after an acquisition or from some outside consultancy who didn't do a great job. Maybe management made a rational business decision to ship something that would make enough money to keep the company going and knowingly took on the tech debt that they would then have some chance of fixing before the company failed. Maybe it actually had very high numbers from coverage tools but then someone realised that a relatively complex part of the code still wasn't being tested very thoroughly.

If a team has identified a weakness in testing and transparently reported it, presumably with the intention of making it better, then why would we assume that setting arbitrary targets based on some metric with no direct connection to the real problem would help them do that?


Had to mention https://fs.blog/chestertons-fence/

If the team does not have automated tests but still manages to deliver working software, maybe tests are not adding as much value as the VP thinks?

The most important things are feature delivery and integration tests, not automated unit tests where you test getters and setters with mock dependencies - absolutely useless busywork.


Tests aren't exclusively about asserting current behavior -- they also help you determine drift over time and explicitly mention the implicit invariants that people are assuming.


Chesterton's fence isn't saying that the fence/test isn't necessary. It's saying you need to take the time to understand the broader context rather than act on a knee-jerk assumption. To be clear: just because developers don't see the need for better testing doesn't mean more testing isn't needed. But it may indicate the VP didn't do a good job of explaining why, which leads to the gamesmanship shown in the story.

Schedule isn't always the most important thing either. It's possible that delivering the software just means you've been rolling the dice and getting lucky. The Boeing 737 MAX scenario gives a concrete example of where delivery was paramount. It's a cognitive bias to assume that "since nothing bad has happened yet, it must be good practice".


This might be relevant if the original comment didn't say "correctly".

Also "not testing a lot" is not a chesterton's fence. "not testing a lot" can't be load-bearing.


Without tests and instrumentation you won't even know if it's not working.


> The real problem is again the culture.

The culture led to fake tests instead of adding tests that were legitimately lacking.

Is that so different from saying they weren't acting like adults?

The phrasing could be called dismissive but I give that a pass because it was mimicking the phrasing from the post it replied to. The underlying sentiment doesn't seem wrong to me.


> Code coverage percentage numbers on a dashboard are a risky business. They can give you a false sense of confidence. Because you can have 100% code coverage and be plagued by a multitude of bugs if you do not test what the code is supposed to do.

Or, conversely, I've been in charge of really awkwardly testable code... which ends up being really reliable. Plugin loading, config loading (this was before you Spring Boot'ed everything in Java). We had almost no tests in that context, because testing the edge cases there would've been a real mess.

But at the same time, if we messed up, no dev environment and no test environment would work anymore at all, and we would know. Very quickly. From a lot of sides, with a lot of anger in there. So we were fine.


Need some tests for those tests!


Insufficient test coverage doesn't necessarily mean lack of self-discipline. It can also stem from project management issues (too much focus on features/too little time given for test writing).


An adult would have started to write the tests themselves so they'd understand what was going on around them. You don't just frown at people and hope for the best.


Yep, I worked at a place that was focused on everyone completing 100% of their jira tickets each sprint, just to get the metrics up. You didn't have to actually finish, it just had to look like you did to the bean counters.

If end of sprint came and you weren't done, the manager would close out the ticket, then reopen another similar one named "Module phase 2" or something similar for next sprint. One guy was an expert at gaming the system, and his ticket got closed and opened anew for about 3 or 4 sprints.


> Learnings? B and C players with questionable ethics screw companies quite rapidly.

No one should be surprised when employees respond to incentives, and blaming them seems a clear indicator of managerial failure: failure to tend to morale, failure to reward actually useful behavior, failure to articulate a vision.


> Without an outstanding company culture, most KPIs are almost useless.

Also, with an outstanding company culture, KPIs aren't really necessary.

So, when would they be useful?

I'm not as negative on KPIs as the previous line suggests, though. They can be useful to shape direction when used carefully. But don't make them too long-lived; discard them and create new ones as soon as they become gameable.


Fire those folks and move on. If you're a subordinate and your leaders are not firing those folks, quit and move on.

Gameable KPIs offer windows into the souls of your colleagues.


Some examples of incrementally delivering tech debt and communicating it to stakeholders:

"This week we found the root cause of why operation X sometimes fails - it's a slow database query which we plan to fix next week. For those interested, here's a command-line demo of the issue."

"With 60 pull requests being submitted per week, and each one triggering a 12-minute automated test run, we were wasting quite a bit of developer resources. This week we brought that time down to 4 minutes. For those interested, here's how we parallelised the tests."

"Remember how the last accounting feature broke a lot of different parts of the system and it took 4 extra weeks to fix? This week we migrated 4 out of 18 core accounting functions to a service completely separate from our main code base. Once the remaining 14 are moved over during the next three weeks, accounting features can be built and tested independently without affecting the main system."
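The parallelisation in the second example often amounts to sharding test modules across workers. A generic sketch of the idea (not the commenter's actual setup; dedicated tools like pytest-xdist do this for you):

```python
from concurrent.futures import ThreadPoolExecutor

def shard(tests, n_workers):
    """Round-robin tests into n shards, so wall time approaches
    the slowest shard instead of the serial sum."""
    return [tests[i::n_workers] for i in range(n_workers)]

def run_parallel(tests, n_workers=4):
    """Run (name, callable) test pairs across worker threads."""
    results = {}
    def run_shard(shard_tests):
        for name, fn in shard_tests:
            results[name] = fn()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # list() forces all shards to finish before we return.
        list(pool.map(run_shard, shard(tests, n_workers)))
    return results

# Stand-in "tests": callables returning pass/fail.
tests = [(f"test_{i}", lambda i=i: i % 2 == 0) for i in range(8)]
results = run_parallel(tests)
assert len(results) == 8
```

The 12-to-4-minute improvement is exactly what you'd expect from spreading a serial suite across a few workers, minus startup overhead.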


This is why management needs to be technical. Otherwise you just end up with feature, feature, feature … death.


I find management only really ends up being a problem if they get too deeply involved in the contents of technical quality work. If they're limited to budget allocation it's actually better to have them involved.

I almost always tell management that by default I'm spending, say, 30% of my time on technical quality and ask if they'd like to dial it up or down temporarily because of a holiday or a deadline they can. I will track this and show it to whomever asks (e.g. their boss and boss's boss might be interested if they've explicitly asked for 8 straight weeks of 0% work on quality).


What falls under technical quality label?


Refactoring, CI maintenance, test automation maintenance, that sort of thing - everything except user visible features or bugs, essentially.


This is why management needs to be competent. The average CEO is 58 years old; their first jobs were in the late 1980s.

The economics and tactics around technology have been revolutionized a dozen times over in the last four decades. Maybe a few rare individuals have kept up, but most likely they rely on outdated strategies from out-of-touch MBA programs and buzzwords.

It's the same reason the market kills public technology companies' innovation and they rely on acquisition.

It's like gunpowder has just been invented and leadership still wants large formations marching against each other. "What if we invested in medical training and hand-washing so we can keep more people alive?" "Nah, more guns and marching."


Funnily enough, the best managers I've had were all in their late 40s to mid 50s, did 20 years in the "trenches", and decided that management was the new path forward for them as they started to prefer people work to technical work.

I think it's work experience that matters most with management. You need folks who are people-oriented but understand the job that the folks they manage are doing, and the challenges that come with it, both obvious and non-obvious, and who can communicate the importance of such work to the broader organization.


There was an article posted here a month or so ago about how this guy was seeing a new class of worker emerge who both understood the necessity of absorbing knowledge from other domains and enjoyed the process. It reminded me of a dev I worked with who'd go on smoke breaks with the art department and then change data structures we'd been using forever.


I was under the assumption this type of worker is what dominated before the industrial revolution constrained people to a small domain in a process. Some may call it artisan, dilettante, etc. Maybe the narrow-domain specialist was the anomaly and not the rule when we expand our lens across history.


I assume these "smoke breaks with the art department" involved more than just tobacco.... :D


> The economics and tactics around technology have been revolutionized a dozen times over in the last 4 decades. Now, maybe a few rare individuals have kept up, but most likely, they all rely on outdated strategies from out of touch MBA programs & buzz words.

Ah, the cult of youth. It's cute, when it isn't killing startups with rookie moves.

It almost sounds like you think management is something like practicing law - like there's a set of rules that are revised on a schedule and they need CLE credits to keep them current.


> Its like gunpowder has just been invented and leadership still wants large formations marching against each other.

Incidentally, marching large formations against each other has been the dominant strategy for most of gunpowder's history. Only in the last 150 years did guns become lethal enough to make this a bad idea.


>Its the same reason the market kills public technology companies' innovation and they rely on acquisition.

The counter-theory is that as companies mature, they have to devote a disproportionate amount of time to maintenance. From that perspective, it's a risk-based approach to maintain the products and services that already have a market and instead effectively out-source the high risk operations of innovation. New companies don't have comparatively much to maintain, so they can focus on innovation. It's not quite such an either-or, but a balancing act. It's just easier to balance when you can outsource risk.

>Average CEO is 58 years old. Their first jobs were in the late 1980s.

I believe there's some neuroscience theory that may illuminate this. When we're young, our intelligence is more plastic and can more readily innovate; as we get older, it becomes more crystallized. What we may lose in novelty generation, we can gain in experience. Experience is necessary for putting ideas in their appropriate context, and I would argue that understanding context matters more in high-level, strategic positions.


You had me with competence, but you lost me with blatant ageism


Maybe it's just me, but basically all the best senior managers I've had have been in their 50s, with plenty of experience under their belt. I've had one or two in their late 60s who were starting to lose touch with the field, but most managers in their 50s I've found have kept up pretty well.


Your gunpowder and large formations metaphor doesn't hold. People aren't idiots, on the whole. Do a little more military history research.


Particular people can be utter cretins; they crashed the world economy in 2008 and nearly caused global nuclear war by sheer stupidity:

> the Russian nuclear sub B-59, which had been running submerged for days, was cornered by 11 US destroyers and the aircraft carrier USS Randolph. The US ships began dropping depth charges around the sub.

Officers aboard had every reason to believe that their American counterparts were trying to sink them.

Cut off from outside contact, buffeted by depth charges, the most obvious conclusion for the officers of B-59 was that global war had already begun.

The submarine was armed with 10-kiloton nuclear torpedoes, and its officers had permission from their superiors to launch without confirmation from Moscow.

Two out of three senior officers onboard agreed to launch, but a unanimous vote was required.


I did say, "on the whole."

I was specifically thinking of the idea that large formations were idiocy after the invention of gunpowder. Bret Devereaux's 4-part series "The Universal Warrior" has an excellent discussion of why a unit of soldiers might fight in large formation. https://acoup.blog/2021/02/12/collections-the-universal-warr... He's verbose, but thoroughly enjoyable.


Reminds me of the ill-fated cavalry charges in WW2. Many older officers insisted that "the fundamentals of war are the same" despite new technology.

I can imagine a modern tech leader living in those times and giving the orders to charge the German panzer formation, sabers overhead.


This says more about the kind of places you've worked than anything else.


Even technical management can succumb to this. Agreed with most of the replies to your comment.


Technical and smart. Otherwise you end up with fix tech debt, fix tech debt, fix tech debt ...death.


I have also seen refactor, refactor, refactor, oh actually it's now no better than before. Whoops!

It's like imaginary tech debt.


Or even worse than before - and the sad part is that no-one intended that to happen. All bespoke software is effectively an experiment that may fail. I've become a lot more wary of the "failure is good" mantra that SV is famous for - it's not good in a zero-sum game where labor is spent on stuff that gives no value, and refactoring for DX is definitely expensive but only possibly valuable.


Here is the thing: in a high-trust environment, KPIs work fine too. In fact, almost everything works under that condition.

But modern organizations are quick to destroy trust on any whim.


A hallmark of strong high trust environments I've found is that management beyond the team (or squad, or pod, or whatever you call it) is only concerned with Milestones, not sprints or other velocity metrics, and the check-ins with them revolve around the progress on the given Milestone(s) the team is working on.

It is up to the team to decide on the best way to approach this, and what works best for that team, and the team is free to do the work as they see it, with the only requirement that if something does negatively affect the Milestone(s), it gets raised quickly and early, in case of re-adjustment.

This however, means:

- Product management has to be competent enough to present a relatively elastic vision that is not so concrete it's essentially waterfall, but not so vague that it's unclear what's being built. Wireframes are usually a good signal here.

- Engineering Management has to be competent enough to communicate (or allow others to communicate) technical challenges that may be involved, and more importantly, what may be unknown, to Product management

- Everyone has to agree that demoing work is more important than talking about work, whenever possible

- Trust in everyone doing the right thing needs to be high; constant interference and meetings will keep this from working right.


The place I'm consulting for at the moment shut down offices for years due to the pandemic. Some people moved out of state and continued to work because there was no signal of ever returning to the office. Then last week they said everyone had to return to the office 3 days per week or be fired with no exceptions. We're losing an MVP on our team due to this. He now lives over 500 miles from the closest office.


The way around this is to open an office just for them. It sounds dumb, but it meets the business requirement, and could also drive further recruitment around their location.


It seems like it'd be way more sensible and efficient to just make an exception for that particular employee (and maybe some other important ones). Though, in my opinion, the best option is to just let anyone work remotely as much as they want. Especially if you go years without ever indicating you intend to return people to the office.

I think I'm luckily grandfathered in due to working fully remotely for years before the pandemic, but if that happened to me I would absolutely start looking for a new job.


That can be more expensive than it appears at first, especially if it's in a different state due to taxes.

I've asked before if I can work remotely for a few weeks at a time and have been told yes, as long as I don't leave the state. In some places, just moving to a different county or city can trigger different tax requirements.


I don't think expense is the driving force here. It would be even less expensive to simply let the worker work remotely. The reason a company wouldn't do this is that it would send an unacceptable message about worker power. If some workers could get exceptions to a new unpopular rule, then why can't everyone? And if everyone could, why have the rule at all? This does not serve the interests of the company's power structure.


If you have competent accountants/payroll, this is no more complicated than anything else. There might be a learning curve to get things set up properly, but the rest is reasonably normal.


It can be quite complicated.

> State corporate or other business activity taxes can apply, if even a single employee is working in a state. In effect, if an employer did not previously have a recognized office in a state, but one employee starts working from there, this can trigger entirely new registration requirements and tax liabilities. It may be necessary to register with the secretary of state and relevant tax authorities, provide a registered agent address, and pay corporate and business activity taxes, sales taxes and employment taxes, including employee withholding. There are often state and local licenses and business permits as well.

https://www.adp.com/spark/articles/2022/06/implications-of-w...


Sounds like business as usual for most accountants that do SMBs. Worst case scenario, you do this 50 times. The same thing happens in the EU when an employee works in another country, only worse. Sounds like you've got it pretty easy compared to there.


I think you underestimate just how small most businesses are. In the US, something like 90% of businesses have fewer than 25 employees. Dealing with setting up in another state is a significant burden.


I don’t think you understand how this works. At some point, you will surpass the sales tax threshold[1] for most states. Then you’ll be reporting and filing taxes in that state anyway. Adding an employee in a state is not complicated by that point.

So sure, if you're a company doing only local sales, or one doing less than $300-400k of revenue a year, this is probably complicated. Once you hit ~$1m in revenue, there is probably AT LEAST one other state you are paying taxes to. Adding states should already be a defined process (or in the process of being defined), and an employee moving there is only one or two lines different.

1: https://salestax247.com/sales-tax-thresholds-by-state


In the premise Consultant32452 provided, the employees had already moved and been working from different states, so presumably all of that would already have had to be done.


A lot of states made changes during the pandemic to allow remote workers. Those have all been ending over the past year and I assumed that’s what they were talking about.


What? I do not see how any state could not allow remote workers.

And all states have long required employers to register with the state government for tax purposes if they employ someone working in the state, as well as comply with the state’s labor laws.


I didn’t say that right. What I meant to say was that a lot of states allowed remote workers to live there without having to pay local taxes.


They tried this at one of my clients' offices too, but middle management all stood firm that they and their teams (I'm on one of the teams) would not be returning. And firing even one person on any of the teams would mean certain death for the company, since it only has a handful of developers and everyone has 7-14 years of built-up knowledge. I am the most recently hired consultant, and I was hired in 2017.

They backpedaled pretty quickly and switched it to: anyone within 10 miles of the office has to come back. There is only one person that close. He's the "office mom" and he's been in basically every day since the pandemic started.

Even if half the teams hadn't moved further away, there is no way most of us would have gone back to commuting 2-4 hours per day.


That seems rather shortsighted to just assume offices won't reopen instead of actually negotiating a transition to fully remote.


I've found the same.

We used goals and velocity metrics on the highest performing team I worked on. This was also a high trust team that happily raised concerns and adjusted priorities/velocity/etc.

The goals and velocity were still extremely useful for getting everybody on the same page for what we were looking to accomplish and how long it would take. We needed to land in the general area, but never got caught up in meaningless drivel over metrics.

The problem is that management wants consistency and expects an explanation when things change. I've found that a team is perceived as failing if it's actually realistic about goals and capabilities.


Yeah, it's like the way in a competent team, just about any software development methodology will work well. The hard bit (honestly, the impossible bit) is making things work adequately with a mediocre team and mediocre leadership.


When you give people a target and it is the ONLY thing they care about, people find a way to hit it.

So be careful what you measure and what you reward.

For every company, cash flow and profits are undoubtedly the most important metrics. It is almost impossible to argue that maximizing those numbers should NOT be a goal.

At the same time when that becomes the ONLY target that matters, the consequences are dreadful.

At least in the US, that is how we ended up with appliances that last only a small fraction of the time they used to 40 years ago.

And even worse, it is how we ended up with the food industry creating more and more addictive food, resulting in 70% of the population being obese. And it is how we ended up with a health system that costs multiples of what it costs in any other country in the world and that, instead of healing people for good, makes them "less sick," addicting them to a few pills for the rest of their lives. Because there is no money to be made from a healthy person.

When you build KPIs, make sure you "think a few moves ahead" and put other correcting metrics and checks in place. At least make sure whoever establishes the metrics has a way to become aware of possible shortcomings and plan corrections in a timely manner.


> For every company, cash flow and profits are undoubtedly the most important metrics. It is almost impossible to argue that maximizing those numbers should NOT be a goal.

Except for VC backed startups. Actually, there are a lot of exceptions. But those mostly revolve around caveats surrounding riskiness and timelines.


There are two types of devs at my job, those that care about shipping lots of new things fast, using new frameworks and pushing metrics. Then we have me and a few others fighting fires that the fast shippers leave behind. I’m fine with it, I enjoy analyzing database queries and figuring out what to index. As long as manager understands where the fires are coming from and why we need to spend time putting them out there’s no problem.


Unfortunately, at far too many companies, those "ship new things fast" developers are the ones who get promoted, get rewarded with the best projects, and grow their careers and influence, while the firefighters just keep fighting fires.


I'm curious if anyone has stories about making this work in the long run.

It seems like such a natural division of labor that it appears everywhere. But I also feel like I've never seen a company explicitly optimize processes around it.

So I'm curious to hear any battle stories.


The problem that I'd imagine is a variation of "Why does Sales get paid so much?". Basically, the more obvious the connection between your contribution and the company making more money, the easier it is to reward you with a slice of the pie.

In a codified "trailblazers and firefighters" model, the trailblazers are much more obviously tied to the company's bottom line. Feature X is required to close deal with company Y, this software team built this feature. Meanwhile, the people frantically cleaning up the mess the first team left is only nebulously connected to the company raking in more cash, relegating them to a second-tier position despite having harder work.


Yes, our team asks for demos every sprint, which I find dumb: a remnant of the idea that every sprint needs an MVP, which in practice is hilariously naive. Especially considering that a screen that took 10 minutes to make is considered much more interesting than detailed code, which really doesn't lend itself to a "demo."


>You can theoretically avoid it with enough trust

I think it ultimately comes down to human values. What's important to the founders and the team? Do they have a clear articulation of those values? KPIs are useful if they're grounded in values that the organization absolutely won't compromise on; KPIs should be a means to an end, not the end themselves.

Whether those values are "we want to make the most money" or "we want to make customers happy" or "we want a sustainable lifestyle for our team," KPIs will only help you pursue them, not define or prioritize them.


Nothing in the OP's comment says that only completed work should go into the status.

Also, if your team is optimising for the status report, your manager has already failed you.


Aurornis didn't say that. He specifically said "because they won’t sound good for the week".

With time and experience, you'll learn that those who can quickly churn out features and bug fixes are deemed extremely valuable to the company.

Let's say we have a wicked bug that has very little chance of happening. But if it does, it'll bankrupt the company, no questions asked. Spending 3 weeks fixing it is nowhere near as impressive to the company as Joe, who's churned out 10 features per week while your status updates are "Hunting for the wicked bug", even if you describe it in more detail.


You are describing a situation where management has failed. No amount of better practices can help in a situation where your management chain cannot evaluate your work correctly.


Even if the detail was something like, "if this wasn't fixed, we were at the risk of losing half our customers" ?


> Even if the detail was something like, "if this wasn't fixed, we were at the risk of losing half our customers" ?

Yes, because this one task appears as a single line item for the manager's manager. They don't care for the complexity or consequences until the fire actually happens. And if the fire happens, they just blame engineers.

If the industry incentives were changed such that manager heads would roll for mass outages, managers would start appreciating big fixes.


If you spend more than a few hours looking for a bug, you should start writing code to go around the bug completely. Even if it results in technical debt, if the company can go under if the bug isn’t fixed, you’d be stupid NOT to charge that TD card.

That’s why devs who understand that survive layoffs and the ones who spend three weeks “hunting for an extinction level bug” don’t.


> if your team is optimising for the status report

If you're providing a status report, you're optimizing for the status report. Period.


Eh, my personal experience has been that just as often we have a mandatory status report that nobody cares about (including my direct and skip level managers). It's just something that needs to be done for contract compliance.


I admire the confidence with which you hold this opinion.

Engineers are not providing status reports. The manager is collating them. How is it any different to a sprint summary?


Yes. The manager has failed you. Doesn't mean you should put your ass on the line to fix things for the business.

That's why these kinds of incentive structures are dangerous, and why things like OKRs put such heavy emphasis on regular, company-wide failure (i.e., they punish 100% success rates).


This industry already has a huge problem with prioritizing shiny new features over fixing bugs, improving performance, verifying correctness, etc. This seems like a great way to enshrine that into your company culture.

Team A: Added a wizzbazz button that says "Wizz!" when you push it with a cool noise! Look at this cool screenshot!

Team B: Worked on fixing an elusive bug causing rare data corruption, but couldn't figure out what was causing it.

Team A: Re-colored the header to the CEO's favorite color! Look at this cool screenshot!

Team B: Fixed bug causing rare data corruption. Started looking into strange performance bottlenecks in UI.

Team A: Added spiffy animations that play every time the user goes to the next page. Watch this cool video!

Team B: Improved page responsiveness issue introduced by the wizzbazz button. Look at this graph; pay attention to the ninetieth-percentile time-to-first-render (dotted blue line) for dashboard users. It does actually go down quite a bit if you look.

Team A: Added a thousand lines of code to the fluxborg module! Look at this graph! It went from FIFTY lines of code to a THOUSAND lines of code! Look at how much that is on this graph!

Team B: Removed two thousand unnecessary lines of code from the wizzbazz module. We decided not to show a graph because the line goes down and we know you'll all think that's a bad thing.

Which team is getting the axe when it comes time to lay off employees? You know. Come on. You know it's Team B. You know it's true.


This is accurate. My director is obsessed with visible status updates, aka UI changes or metrics. So anyone reporting "fixed a 3-year-old bug by refactoring old libraries, improving maintainability" just doesn't cut it for him.

And unfortunately, engineers have to dance to this director's negligence even when it comes at the cost of their own sanity.


Corporate dynamics make shiny objects rise to the top even if they're fool's gold.

The real gold is in improving what's in bad need of improvement.

The companies that stay afloat are often those abundant in people who know what really matters and find a way to do it regardless of what middle management thinks.


> The companies that stay afloat are often those abundant in people who know what really matters and find a way to do it regardless of what middle management thinks.

Which is hard to come by, because those people aren't rewarded.


Think you perfectly described teams working on Windows 11.


KPIs encompass more than just team performance; they also cover the performance of the product itself. Even if you excel at delivering, a poorly performing product built on flawed ideas renders the delivery insignificant.


In the above model, product performance could easily be addressed too, as well as demonstrating that the features were data-driven; it doesn't need to be a boring feature list.

For each delivered feature, there could be an overview, e.g.

"Users can now print reports more easily. On our user request tracking platform, this issue had 321 upvotes, and 5 Premium clients requested it as well. We placed the print button THERE because, among the three designs, this one performed 23% better according to METRIC. For more info on the experiments, see link or ask on #mobile-team. The feature was released for two weeks behind a feature flag, it performed great, and we made it available to everyone last Monday. 5% of users who visited the report page used the feature, and we have already generated 5000 PDF reports. Big client A already complimented the feature, and it unblocked the sales process with Client B."

At the end of each email, there could be an analytics overview of the product with overall useful metrics, significant changes etc.

This format also helps with people being on holidays, sick, etc.


During my tenure as a product manager at Rakuten, I consistently grounded the initiation and culmination of all product endeavors in data. Prior to embarking on any project modification, data was required to validate (or refute) the underlying hypotheses. This meant that if we had only a hypothesis and no data, we first added hooks to understand the situation through data (e.g. GA events, etc.).

For instance, when executives contested the efficacy of our registration form for X reasons, it became incumbent upon us to ascertain its current standing. This involved computing the ratio of registrations to site visits and conducting an in-depth analysis of the errors that users encountered while interacting with the form. We achieved this by integrating Google Analytics events with error messages, among other techniques. We provided first a picture, then we proceeded to make the changes.

This systematic approach enabled us to gain a comprehensive understanding of the situation, facilitating purposeful changes. Subsequently, we gauged the impact of these alterations using key performance indicators (KPIs) such as the registration-to-visits ratio, as well as the event tracking added earlier, to understand whether we had really made a difference.


What about things like "Stuck on XYZ because no one fixes it because it doesn't show any good metrics and management doesn't care about it. So I fixed it at the expense of my own time."


>What about things like "Stuck on XYZ because no one fixes it because it doesn't show any good metrics and management doesn't care about it.

I'm confused by your comment. How did you decide this was something worthy of fixing if it "doesn't show any good metrics"? If you can't quantify the issue in any manner, how do you determine it's worth doing?


Generally, using one's brain.

This is especially important when metrics would not be expected to be available -- for example, if you're designing a nuclear reactor, you need to think hard about ways to prevent a meltdown in advance, rather than collecting meltdown statistics and then fixing the piping problems that correlated with the most nuclear meltdowns.

This is also necessary when the true metric that matters is very hard to evaluate counterfactually. For example, perhaps your real task is "maximize profit for the company", but you can't actually evaluate how your actions have influenced that metric, even though you can see the number going up and down.

And necessary as well when a goal is too abstract to directly capture by metrics, resulting in bad surrogate metrics: for example, "improve user experience" is hard to measure directly, so "increase time spent interacting with website" might be measured as a substitute, with predictable outcomes that bad UI design can force users to waste more time on a page trying to find what they came for.

All of these problems are faced by metric designers, who need to pick directly-measurable metric B (UX design metric) in order to maximize metric A (long-term profits) that the shareholders actually care about, but they cannot evaluate the quality of their own metrics by a metric, for the same reason that they were not using metric A directly to begin with.

(See also the McNamara fallacy, which parent comment is a splendid example of: https://en.m.wikipedia.org/wiki/McNamara_fallacy )


There is also the classic story about returning war planes. You can try to make sense of the damage, the bullet holes, and try to create strategies around how to improve affected areas. The problem is that the damage you actually want to inspect and prevent is on the planes that did not make it back, the ones you do not have an abundance of data on.


The classic war planes story was about Abraham Wald - https://en.wikipedia.org/wiki/Abraham_Wald


Let's say you've got a logging system that sometimes drops lines. And this sometimes makes debugging things hard, because you can't say whether that log line is missing because the code didn't run, or because the log line was lost.

Impact on end users? Nothing measurable. Impact on developers? Frustrating and slows them down, but by how much? It's impossible to say. How often does it happen? Well, difficult to count what isn't there. Would fixing the issue lead to a measurable increase in stories completed per week, or lines of code written, or employee retention? Probably not, as those are very noisy measures.

Nonetheless, that is not a fault I would tolerate or ignore.


> I'm confused by your comment. How did you decide this was something worthy of fixing if it "doesn't show any good metrics"? If you can't quantify the issue in any manner, how do you determine it's worth doing?

Because sometimes some things without metrics are incidental to the actual thing you set out to do.

E.g. a large refactor that switches libraries, which is necessary for your new service that gives 10% lower latency. But that library refactor needs to be done first, and it will take 2 months.


This is a challenge often encountered by product owners. At times, what appears to be a problem may not actually be one. Conversely, you might discover that a perceived issue holds greater significance than the current circumstances or priorities suggest.


KPIs should do that. However, it is very hard to measure the useful things in advance. The real measure is the bottom line from when the product is released until it stops production. If you have an apprentice plumber cleaning out clogged drains, it is easy: you work for a couple of hours and then you get a check a month later (this is a reasonable example of something easy to measure, but I doubt any plumbing company ever operated this way).

However, a lot of software is on a very long release cycle with hundreds of other developers. Even if you are Google and deploy to production often, it can still be a long time between starting work on some feature and it actually being useful to customers, and then you are constantly enhancing things for the next version, so there is even more noise making it hard to measure what has a useful impact.


Measuring is not difficult, unless you are attempting to measure something immeasurable. The appropriate sequence would be as follows:

1. Determine whether there is supporting data for the proposed change.
2. If no data is available, implement tracking to establish a reference point.
3. Measure the reference point and ascertain what could lead to success.
4. Implement the change.
5. Review the new data and compare it against past values.
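As a concrete illustration of steps 3 and 5, here is a minimal sketch of measuring a reference point and comparing it after a change. The metric (registration-to-visit ratio) and all numbers are hypothetical, borrowed from the registration-form example elsewhere in the thread:

```python
# Hypothetical sketch: establish a reference metric (step 3), then
# compare the same metric after the change ships (step 5).

def conversion_rate(registrations: int, visits: int) -> float:
    """Registrations per visit; the reference point from step 3."""
    if visits == 0:
        return 0.0
    return registrations / visits

def relative_change(before: float, after: float) -> float:
    """Fractional change of the metric after the change (step 5)."""
    if before == 0:
        return float("inf")
    return (after - before) / before

# Baseline measured before the change, new values measured after
# (made-up numbers for illustration).
baseline = conversion_rate(registrations=420, visits=12_000)   # 0.035
updated  = conversion_rate(registrations=540, visits=12_500)   # 0.0432

print(f"baseline: {baseline:.4f}")
print(f"updated:  {updated:.4f}")
print(f"relative change: {relative_change(baseline, updated):+.1%}")
```

The point is not the arithmetic but the ordering: the baseline must exist before the change ships, or there is nothing to compare against.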


Sure, but the feedback loops for what I want to measure are 10 years long. You can go from fresh out of school to senior engineer in that time, so we need to be able to measure contributions on a much shorter cycle than 10 years.


But I am not talking about measuring people's performance, but product performance.


This sort of practice killed me as a platform engineer when it was an org-wide expectation that we do these weekly demos of what a feature looks like to a user. Sure, let me just screen record a new environment buildout or a suite of applications running for ten hours and you can observe how often a problem occurs and what happens in response. Even better when an activity for a week is something like we upgraded a storage driver across all environments. What does a user see? Nothing, but the old one had bugs, including at least one CVE. Now we don't.


Lean into it.

"What does a user see? Nothing, but the old one had bugs, including at least one CVE. Now we don't."

That seems like reasonable content for a demo, and with just the right amount of self-deprecating humor it can be a slightly entertaining 2-minute presentation.


The smartest folks will recognize "User sees no change" as a good thing. Keeping things stable is extremely important.

Unfortunately, too few people think that way.


This seems perfect to me.

It reminds me of deviantArt tech dev meetings when I worked there (~10 years ago)

Every Monday, each tech/product team did a small demo for the whole tech org (maybe 50 people, growing to nearly 100 by the time I left).

Teams without an interactive demo would just put together a web page with a few pictures and text describing what they accomplished, and the team lead would present it to the whole org.

Teams competed for dubious accolades like most lines of code deleted, most embarrassing feature, best meme. Prizes were arbitrarily awarded by the VP of technology in the form of credits to buy art from their prints shop.

Similarly to what you describe, this practice relied heavily on feature flags enabling us to release features for testing while they were still in early stages of development. This worked out really well for getting feedback and QA testing early on, while also keeping everyone up to date about everything that was happening outside of our immediate area of focus. It was fun and motivating as well. deviantArt did a lot of things really right though back in the early 2010s. IMO it was a really incredible engineering org and probably the best job I ever had.
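The feature-flag practice described above boils down to gating half-built features by user group so QA and internal users can exercise them in production. A minimal sketch of that pattern (all flag and group names here are hypothetical, not from the original comment):

```python
# Minimal feature-flag gate: half-built features ship to production
# behind a flag, visible only to listed user groups until demo-ready.

FLAGS: dict[str, set[str]] = {
    # flag name           -> user groups allowed to see it
    "new_report_printing": {"qa", "internal"},
    "agent_balance_view":  {"qa", "internal", "beta"},
}

def is_enabled(flag: str, user_groups: set[str]) -> bool:
    """A feature is visible if any of the user's groups is allowed.
    Unknown flags default to off, so dead flag checks fail closed."""
    allowed = FLAGS.get(flag, set())
    return bool(allowed & user_groups)

# A QA tester sees the half-built feature; a regular customer does not.
print(is_enabled("new_report_printing", {"qa"}))        # True
print(is_enabled("new_report_printing", {"customer"}))  # False
```

Real systems usually layer percentage rollouts and per-user overrides on top, but the group-based gate is enough to make "release weekly for testing without affecting production" work.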


That optimizes for Demo-driven development though. I've seen a lot of flashy "clever" stuff created for demos which were later totally abandoned. And a lot of basement-level code wrangling that is difficult to demo.


How did the team report out things that weren’t new features? Performance improvements and bug fixes are often more important than adding more stuff to the product.


I had a similar report structure, and generally we would relay the same information. "Here's a graph of measurements from before and after the code change, showing a 20% reduction in resources saving us $X/year".

Generally that was good enough.


I like this approach with a graph. In the past I’ve run into people ignoring text bullets about the “boring” development wins.


Is there some reason you can't report on performance improvements and bug fixes?


Because a shiny new feature that takes 1/10th of the time gets 10x more praise and attention.

Because naive senior vice president MBAs believe there never should have been bugs in the first place, and every fix is an admission that the bug existed at all.

Because every email is now a weekly contest between teams, and when hard times come the ones who reported on performance improvements and bug fixes will lose their jobs long before the ones who made the header the president's favorite color.

Because every email is your team trying to convince the company of your own worth and it's much harder to show a pretty screenshot of a performance improvement than a new feature (Graphs of before/after performance every week? Every week?)

Because bugs don't always follow your bullshit schedule made by a bullshit manager who is looking for a pat on the back by replacing bullshit A with bullshit B, instead of anyone anywhere in the company ever just trusting anyone.


>Is there some reason you can't report on performance improvements and bug fixes?

Exactly. What does "performance improvement" even mean if you aren't measuring performance before and after?


Not GP but what about:

- Improved p50 latency of endpoint /very/important/endpoint by 22% [attached mini-graph showing the change]

- Fixed this bug that had been reported by 37 clients in the last 30 days as seldom affecting them

etc
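A status line like the p50 improvement above falls straight out of the latency samples you already collect. A quick sketch of how such a figure might be computed (all sample data is made up):

```python
# Hypothetical sketch: compute the p50 (median) latency before and
# after a change, to back a status line like "improved p50 by 22%".
import statistics

def p50(samples_ms: list[float]) -> float:
    """Median latency in milliseconds."""
    return statistics.median(samples_ms)

# Made-up latency samples from before and after the change.
before = [110, 120, 100, 130, 115, 105, 125]
after  = [85, 95, 90, 100, 92, 88, 97]

improvement = (p50(before) - p50(after)) / p50(before)
print(f"p50 before: {p50(before)} ms, after: {p50(after)} ms")
print(f"improvement: {improvement:.0%}")  # prints "improvement: 20%"
```

Attaching the mini-graph is then just plotting the two sample windows; the number in the email comes from exactly this kind of comparison.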


E.g. "The Foobar page for lists larger than 1000 now loads in under 3 seconds, down from 10 earlier", "You no longer have to log out and log in again when we update config Y", "When you report money mismatches, you no longer have to send screenshots (we have added logs)", etc.


I like how the most convincing examples of "agile transformations" almost never use the word agile.


In those cases (the majority) the word Agile is used to describe it. Not actually agile, just whatever the hell Agile is.

https://www.youtube.com/watch?v=a-BOSpxYJ9M


> KPIs compress away details that stakeholders need to form an understanding.

Comms. It always boils down to comms. Specifically, comms that drive understanding.

KPIs = knowledge.

But what you did, increased understanding.

And what I like to say is: "(Knowledge isn't power.) Understanding Is Power."


But doesn’t knowledge imply understanding? Otherwise it would just be facts without context.


KPIs are facts. Facts without context don't lead to understanding.


IMO KPIs are metrics to optimize (if they are objective). A set of values is already encoded in them by the specific formula being chosen.


At the extreme, watch Jeopardy. People spitting out their knowledge. Unfortunately, 99 out of 100 times, they have very little *understanding* of the answers they offer.

Knowing the dots doesn't mean you can connect them. And thus, "Understanding Is Power."


I totally get what you mean here, but I don't know if Jeopardy is a great example. I think of that like trivia, which is a skill you have to practice and exercise. Rote memorization can increase knowledge and definitely helps. (Here comes the but :) ) But I don't consider someone who has a head full of facts (only) to be knowledgeable. They have to be able to talk about the concepts and theory that back up the facts. The very act of connecting the dots demonstrates knowledge.

Anyway, I think we are probably saying the same thing and discussing semantics (and to be fair, it is probably just me :) ).


I agree. And that is why I began with "At the extreme..." :)


Was your sum KPI burned down on a report during the quarter or was it only reviewed once the quarter was complete?


I did keep an eye on it because it was a good indicator of problems. But I never drove the team based on it, any more than I would try to increase a car's speed by grabbing the speedometer needle.


> every Friday, each team sends a brief "here's what we delivered this week" email to the whole company

I am not sure this scales very well with company size. ;)


It doesn't. I've recently had to add about 30 mail filters to keep all the weekly/bi-weekly area newsletters trumpeting every random "milestone" for every team out of my main inbox.

When it comes time for me to move on from the company, one of my antics is going to be to reply-all to each and every one of them with the word "unsubscribe."


They're brief emails with a few lines and screenshots that can be read in under 2 minutes. Most stakeholders will read through emails from teams developing features for their business area, and skim the rest. They can also opt out entirely if they want to. But then they lose the right to complain that they didn't know what features are being shipped and when, or how to use them correctly (remember: the mail works as a "how to" too).


It would be too much in large software companies. There are thousands of teams, and several products have absolutely no relevance to each other. Such a self-inflicted spam attack would not even remotely work.


Have each squad send the email to the platoon, and each platoon send a summarized email to the company.


Depending on company culture that can lead to uniformly very positive feedback on the lower levels (amazing new features, developers show excellent performance, all of them), which does not seem to fit the mess the top management can see at their level (customers complaining, sales dropping).


"the Ministry of Plenty's forecast had estimated the output of boots for the quarter at 145 million pairs. The actual output was given as sixty-two millions. Winston, however, in rewriting the forecast, marked the figure down to fifty-seven millions, so as to allow for the usual claim that the quota had been overfulfilled. In any case, sixty-two millions was no nearer the truth than fifty-seven millions, or than 145 millions. Very likely no boots had been produced at all. Likelier still, nobody knew how many had been produced, much less cared. All one knew was that every quarter astronomical numbers of boots were produced on paper, while perhaps half the population of Oceania went barefoot. And so it was with every class of recorded fact, great or small.

[...preamble] it was not even forgery. It was merely the substitution of one piece of nonsense for another. Most of the material that you were dealing with had no connexion with anything in the real world, not even the kind of connexion that is contained in a direct lie. Statistics were just as much a fantasy in their original version as in their rectified version. A great deal of the time you were expected to make them up out of your head."

- George Orwell, 1984.

All one knew was that every quarter astronomical numbers of features and improvements were produced on paper, while perhaps half the users of SaaS products complained or left.


The irony is this is exactly what we did (albeit not the whole company) before adopting agile.


I was about to say the same. Before Agile, we had one Friday meeting with the development manager where we pretty much did this.


So you planned, built, tested (and fixed any issues from testing) and released features (and supported them) each week?

Edit: I see the team is made up of 50 people across multiple teams. That makes a bit more sense.


Interesting. I did the same within just my team. We used Confluence to publish those, describing the accomplishments of every IC. The initiative was started to appreciate the team's work, so I was still unsure whether anyone outside my team was reading those reports, but since I started I never got the question from the Steering Committee "what are all of these peeps doing there?"

Question: what format did you use (number of words)? Kind of release notes? Was anyone actually reading those?


E.g.

"Agents can now see their up-to-date balance on the app (previously they had to wait for the weekly email). Here's a screenshot. Here's a direct link to the QA environment so you can try it out yourself."

That's usually it for a given feature.


That works well when the deliverable has shipped and its value can be easily understood by the audience it's targeted to, for example a product team shipping updates to non-technical people. When working on backend refactoring, CI/CD pipelines, infra work, and other such things, it's only valuable if the audience is engineering; other people aren't going to understand the value of a new CI/CD pipeline, for example.


I run the devrel team at Ultra.io, and tried this for 6 months or so at the beginning of the year.

These types of updates only work in an environment where people actively read them.


Overall this is my standpoint too. But it depends on how the KPIs are set. I think it is more destructive to have no general direction at all.


Non frontend engineers probably despise this.


Did anyone actually read it? The problem with this is the frequency is so high that it would just get tuned out.


My first thought was that I would create a rule to dump these in a folder I never looked at. However, if I were on the team generating that email I would like it if it could truly replace the old ways of providing updates to leadership.


Using velocity as a KPI should be avoided at all management levels.


scale of team?


About 50 engineers spread across 7 teams


I'm assuming each team has a PM, because it's impossible for an engineer to communicate, therefore this couldn't have happened without a PM in place to play "telephone".


All emails were drafted and sent by engineers. They did require some coaching initially, but eventually they got the hang of it. It's mostly plain English.


Good to hear!


I saw this at my last company. We had to set 3 OKRs (goals) per quarter, and each OKR had to be measured by 3 KPIs. Changes to these after the first week of the quarter had to be approved by our managers.

Our CEO had the philosophy that if you achieved 100% of your goals every quarter, you weren't being ambitious enough. As a result, we would stop logging progress at ~80% of our goals, which was the threshold to qualify for your bonus.

However, scope on projects would change during a quarter, and sometimes the VP would become distracted by another shiny new thing and make us start on his new pet project. As a result, some OKRs were abandoned and would remain at 0% by the end of the quarter.

Now, no manager (especially an engineering manager) wants to be a bad guy with his employees, so my boss often just changed our OKRs (or, if we only achieved 50% of our KPI, would lower the KPI's target) on the last day of the quarter so that we would all qualify for a bonus.

The company was losing 18mil a year and was sold to our competitor at a 100mil loss for the parent company.


> which was the threshold to qualify for your bonus

This is the problem: tying OKRs to bonuses. It sounds so logical from a naive standpoint and yet it has so many detrimental side-effects.

Where I'm at, we have a hybrid system that works quite well. OKRs are divorced from bonuses, and they are always "stretch OKRs", i.e. ambitious ones. Essentially they are the compass for where the team is headed. Then there are MBOs, which are tied to bonuses. These are very conservative, so that the bonuses are attainable.

With this system, in practice, the MBOs are the subset of the Key Results that actually seem achievable within the quarter.

Of course there's a functional way to play this game and a dysfunctional one.

Working, functional version: spend time defining actual Objectives, regardless of whether we know how to measure them. That last point is very important. Only when the Objectives are defined (and they are almost always qualitative in nature), THEN come up with Key Results to try to quantify the objective (and be open to adjusting them). Finally, pick out the KRs that seem realistic, and make them the MBOs.

Dysfunctional version: start at the opposite end. Define MBOs. Then Key Results. And finally, try to BS some form of Objectives to make it all sound coherent. This biases towards what is easy to measure and quantify, to the detriment of business needs.


> tying OKRs to bonuses

Goodhart's Law: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes."

[0] https://en.wikipedia.org/wiki/Goodhart%27s_law


KPIs are great for self-accountability. They give you a clear goal to track toward and a measurable read on your progress both during and after.

As an oversight measure, they fail for the same reasons as every other oversight measure.

Hire people you can trust, give them skin in the game, and then trust them. If you exclusively interact with me via the carrot and stick, I will recognize the lack of trust and respond with an equal lack of trust.


> carrot and stick

There is too much of this in our industry, and inevitably our churn matches the service industry's.

And then management is surprised when engineers burn out and stop caring. I mean, do they not understand reciprocation? Did they really think treating people like horses would keep them honest and invested in the success of the company and the team?


KPIs are great for extracting additional value beyond the agreed upon job responsibilities. If you can't measure my contributions to your bottom line directly, then _you've_ made an accounting mistake, not me.


There is this thing called stretch goals in OKR that is supposed to be really hard to achieve. So if you meet all 100%, you either don’t have these stretch goals or the team is lying.

Now of course, whether OKR works depends on culture as well. Sometimes, asking people to stretch amounts to asking them to work harder, so naturally the system will be gamed in that context.


What we are learning from this thread is that OKR completion percentage is a very bad surrogate metric for OKR difficulty. That's not surprising; it's very hard for managers to evaluate the ambitiousness of individual OKRs unless they are technical experts, whereas completion percentage is an easy surrogate that should correlate with effort and achievement difficulty. Completion percentage is easily and directly measurable, so of course the CEO rewards that instead. Except that it correlates positively with effort and negatively with difficulty, so it cannot simultaneously incentivize working hard on difficult tasks, resulting in terrible counter-incentives and gaming.


If the KPI is not an objective number (for instance, profit ratio or absolute revenue or Lifetime Value per Customer for x vertical) available on a dashboard I think it is not useful.

You can set up OKRs like improving customer experience and, at the end of the day, someone will declare that they have been accomplished or not based on their preference of employees or mood. That is functionally the same as saying this bonus is optional and we might give it to you if we feel like it.

It is a pity because well chosen KPIs that employees try to optimize for can make a company work great. But they need to be _very_ well chosen. This, like economics indicators, often requires creativity and technical knowledge.


I have worked in ~21 different positions around industries, with about half in the software world.

I have seen this over and over again: a company made up of good engineers without focus doesn't do very well, unless it has accidentally landed in a market with few to no competitors, a circumstance that encourages even more lack of focus, because who are you even competing against?

Even companies with bad practices and processes, engineering that isn't that great, and often putrid management behavior made MONEY and got companies sold, because they focused on a specific goal that customers wanted.

In my experience almost all of that came down to the CEO wanting to hold the line and do the thing that we needed.


Clarity of vision above all else is what drives a business forward.


>As a result, we would stop logging progress at ~80% of our goals, which was the threshold to qualify for your bonus

I feel like you're stating this as "Look how silly KPIs are!", but your CEO is correct. The issue isn't the KPIs; it's the team members who are willing to intentionally game a system in conflict with overarching company goals.

There's nothing magical about KPIs. It's data. How you apply it matters. If your team wants to take advantage of it for selfish reasons, there's nothing stopping them.


>> if you achieved 100% of your goals every quarter, you weren't being ambitious enough

The CEO directly created the perverse incentive: actually hitting your goals only meant more work.

> How you apply it matters.

The CEO applied KPIs in a way that disincentivized hitting your targets, and the team responded accordingly.


>The CEO directly created the perverse incentive: actually hitting your goals only meant more work.

You say this as if employees are absolved of any responsibility to the performance of the company. You're not clever for saying, "Oh, 100% means we weren't ambitious? Then we'll work 80% as hard." In fact, I think you'd have to be pretty dense to not understand the intention of the KPI: set lofty goals and try to get there.

If you don't think it's working, why not go back with ideas to improve it, rather than doing less work? I mean, this company literally failed (from the sounds of it) and people here are blaming the CEO's KPIs versus the employees who intentionally worked less to game the system. Strange.


As the saying goes: if everyone around you is an asshole, you're the asshole.

If you create a system which is abused by 100% of employees, that's on you. Everyone who works at a company does so for their own personal benefit. You can moralize about it if you want, but I doubt you would work for a company if it didn't provide an incentive structure of some kind.

> why not go back with ideas to improve it

I was a junior developer working for a 1500 person company. What should I do, demand an audience with the CEO, who lived in a different country?


The incentive structure is make money by creating things or providing services that other people want to pay for.

Most people willingly turn their brain off when it comes to evaluating their daily/quarterly performance against that simple truth.


I don't care if the company fails and there's no amount of emotional manipulation that can ever make me care. Only double-digit percentage ownership could make me care. Of course you can't get double-digit percentage ownership of a big company like Facebook, but in a company that big there's almost nothing I'm doing to move the needle anyways so there's no reason to be concerned, any shares are gambles.

I do care about my professional reputation. I'm highly skilled and understand how to improve social dynamics on a team, making me very valuable to have around. Clever leadership will figure out a way to align my skills with their own goals. But, if they don't, that's their problem.


> I don't care if the company fails and there's no amount of emotional manipulation that can ever make me care. Only double-digit percentage ownership could make me care.

That's the bottom line, isn't it? If the CEO wants to align employees' behavior and motivation with his own, he needs to structure the employees' compensation to be like his own compensation. If my compensation is just a "competitive" salary and a few token stonks, then I'm going to do a 9-5 job and hit the required 80% of my KPIs before I go home for the day. If my compensation consists of life-changing equity? I'll work days, nights, weekends, and holidays.


How many people can a company scale to where they all get "life changing equity"? Certainly N<10 if you want double digit equity.


It's important not to confuse "I don't care if your company lives or dies" with "I'm going to be a bad worker." I'm an amazing worker.

As I said, I care about my professional reputation. When/if the company dies I want every single one of my former coworkers fighting to get me hired at wherever they land. You don't get former coworkers clamoring to get you in at their new shop by being a bad worker.

Companies want me to work for them, and former coworkers want me to work with them. I am just not emotionally invested in the companies who pay me a wage, I'm going to set reasonable boundaries on my time, and I'm generally going to respond rationally to incentives. Responding rationally to incentives includes considering if the extra work required to achieve a particular bonus is worth the effort, often it is not.


You don't need to give each individual employee double-digit (>10%) equity for it to be life changing. Let's say "life changing" for a normal worker-bee tech worker is ~$500K and the company is an average, run-of-the-mill $5B market cap company. We're talking about a 0.01% share for each employee. If you have 1000 employees, that totals up to 10% representing life-changing equity for all 1000 of them. Hoping my math is not off by a factor of 10 or something.
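The arithmetic checks out; here is a quick sketch (the $500K target and $5B market cap are the hypothetical figures from the comment, not real data):

```python
market_cap = 5_000_000_000   # hypothetical $5B company
life_changing = 500_000      # hypothetical "life-changing" amount per employee
employees = 1_000

# fraction of the company each employee would need
per_employee_share = life_changing / market_cap
# total equity pool if every employee gets that share
total_pool = per_employee_share * employees

print(f"{per_employee_share:.4%} per employee")  # 0.0100% per employee
print(f"{total_pool:.0%} total pool")            # 10% total pool
```

So no factor-of-10 error: 0.01% per head times 1000 heads is exactly the 10% pool claimed.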


Does that ratio of market cap to employee count hold up often? I don't think it does.

You think most 200+ Employee companies are unicorns?


As it turns out, crafting good incentives is the responsibility of management, it's a really hard problem, and CEOs tend to be reluctant to use the same method that boards use to incentivize them: a gigantic package of long-dated stock options. Maybe they think it doesn't work very well.


> think you'd have to be pretty dense to not understand the intention of the KPI: set lofty goals and try to get there.

So, hypothetically, if I busted my ass for three months straight to hit 100%, just to be told "Oh, your goals were set too low, so here's more work", what incentive at that point do I have to work even harder? The new target's 80% was my last target's 100%, which I struggled to hit (because, as you said, these goals are lofty). So if I bust my ass even more, do even more overtime, what do I get then? "Your goals were set too low, so here's more work." Do I just continue this until I burn out? Or do I hit 80% of my target so my workload doesn't spiral out of control, and get exactly the same bonus?

> I mean, this company literally failed (from the sounds of it) and people here are blaming the CEOs KPIs versus the employees who intentionally work less to game the system. Strange.

Employees gaming their KPIs is a reaction to and a direct result of the CEO's "if you hit 100%, you didn't work hard enough" policy. CEOs using constantly moving targets and increasingly unrealistic expectations to extract every last drop of productivity out of their employees is met with an equally maximizing reaction: employees find a maximum amount of time and energy they can spend such that the status quo is preserved and the bar doesn't move as high next time.

I would bet KPIs were not the main reason that company went under. Obviously we don't have specifics, and companies rarely fail for one reason. I didn't blame the CEO for the company going under either - I simply pointed out the perverse incentive.


> If you don't think it's working, why not go back with ideas to improve it, rather than doing less work?

Doesn't sound like "less work", it sounds like they did the amount of work management wanted them to do. Just like working 40 hours a week is "less work" than 60 hours a week, but I'm not going to work 60 hour weeks just because of some sentimentality around the company being successful.


If a boss gives me a list of five things to do and tells me if I do 4 of them that's perfect, 5 and I didn't set goals properly , I'm going to do 4. Particularly if my compensation depends on only doing 4.

It boils down to that ridiculous "stretch goals shouldn't be achievable because you should be shooting for the unattainable" mindset. If you want to have that mindset, don't beat the employees for achieving everything. Beat yourself for not curating their goals.


Goodhart's law: https://en.m.wikipedia.org/wiki/Goodhart%27s_law

Tracking and measuring KPIs is good; without it the company won't know its current performance or how to fix / improve it (though beware of burnout). However, when it becomes a target with a financial incentive, it ceases to be a good measure, because everyone will try to optimize for reaching the target.

Velocity (work completed / planned * 100%) should always be a measurement, because it can help identify errors in the workflow or team, and prevent demotivation when people need to perform tasks outside their goals.
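That velocity measure is just a ratio; a minimal sketch (function name and "points" unit are illustrative, not from any particular tracker):

```python
def velocity(completed_points: float, planned_points: float) -> float:
    """Percentage of planned work completed in an iteration."""
    if planned_points <= 0:
        raise ValueError("planned work must be positive")
    return completed_points / planned_points * 100

# e.g. a team that planned 40 points and completed 30
print(velocity(30, 40))  # 75.0
```

Tracked as a measurement (not a target), a persistent dip below 100% points at planning or staffing problems rather than at individuals.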


> If you don't think it's working, why not go back with ideas to improve it, rather than doing less work

the description given didn't make me think it's the kind of workplace where the CEO goes around asking people, "what can I fundamentally change about our goal-setting process?"

in any case, goals should be achievable, so if you punish someone for achieving them, you messed up


The really depressing thing about this thread is that OP is the one who will probably end up being our boss.


Yes and no. There's another view where employees set actually ambitious goals and only get 90% of the way there and are still rewarded and the company also wins.

If you are hitting your goals every quarter, that is a great thing - but I tend to agree, at the other end of the spectrum you have people setting easily achieved goals to check a box.

The real issue is weeding out people just there to check a box when you're small, and, when you're large, designing a system that internalizes that a lot of your employees are there to check a box.


The vast majority of workers do not internalize empathy for their company. They are there for the paycheck. You can either thrash against the entirety of the human race and demand that people lie to your face about how much they care about the company's goals, or you can create a system that takes advantage of the self-serving nature of your employees so that their goals align with your own.


>They are there for the paycheck.

Well the paycheck ends when the company folds. "Doing good work" is in the best interest of all parties.


For years I was the one who went above and beyond. I thought of brand new ideas, approaches, and wrote so much code some weeks would just fly by.

You know what happened?

1. I got even more work. So much more that I no longer had the freedom to explore and think.

2. The carrot on the stick would be re-attached to a longer stick. It was never enough. I was performing "exceptionally" but my yearly review would never move me closer to my goal.

3. I would end up getting tied down in a bunch of stuff from (1) that ended up leading to burnout because I was no longer working in a place where I felt useful and needed.

4. I still got laid off.

Now, I just go to work and come home. I am pissed I even have to do pagerduty because 9/10 times the problem is someone wasn't given enough time, creating a bug, which was then subsequently never addressed because aGiLe methodology says you only move forward.

"High power" CEOs and PMP-certified project morons are the reason why people's care for your product ends with their paycheck. No amount of "demo days", "email updates", or metrics will fix it. I, and everyone who is like me, will game your metrics until they stop being useful. It's not malicious. It's an optimization. If you want me to do exactly what your metrics ask I will. Nothing more, nothing less. You pay me for 80 hours a check, you get 80 hours. Nothing more, nothing less. If you don't want me to think I won't. Afterall, I'd rather save my brain cycles for things I enjoy. You're paying me to squander my talent. That's YOUR problem. Not mine.


> Well the paycheck ends when the company folds

So, you get another job. With great odds of increasing your salary more than you could at your current position. There's really few downsides for workers in having this attitude.


> Well the paycheck ends when the company folds. "Doing good work" is in the best interest of all parties.

Option A is being an average worker, not trying especially hard, not thinking about work outside of work hours, and having absolutely no emotional investment whatsoever in the wellbeing of the company - and potentially having to get a new job in a few years time, often with a pay increase

Option B is "doing good work", "going the extra mile", "being a rock star", putting in lots extra time and effort and emotional energy (for years!) - just on the tiny chance that your particular efforts will be the difference between the company folding vs. being successful

Option A sounds a hell of a lot better to me than option B.


There are more companies. I also tend to abandon ship before the company folds. I left before the 100mil loss and subsequent layoff.

Additionally, at such a large company (1500 people), my individual efforts have little to no effect on whether or not a multinational corporation goes under.


This is the "I swear communism works, you just need people who actually care about worker life quality and aren't tyrants to lead the revolution" school of org theory.


I mean, that's also probably true. Best form of government is a benevolent dictator and all that...

More that you have to have realistic expectations when you structure an org, that people doing the work are going to pad and sandbag in successive tiers up, and people in charge of outcomes are going to push for bigger and more down.


I would have laughed at that if I hadn't lived the first 14 years of my life in a communist country...


Does capitalism work?


Interestingly, individual firms behave more as centrally-planned economies, and market activity is mostly absent inside of them. This is a long-studied curiosity: https://en.wikipedia.org/wiki/Theory_of_the_firm


The book _The People's Republic of Walmart_ by Leigh Phillips and Michal Rozworski makes a compelling argument for why employing the same technologies companies use to manage supply chains and decision making at a state level could actually produce a working communist system.

Of course, we first would need to agree that this is desirable and that's probably the point where it breaks down.


If you create a financial incentive for a behavior, you're telling your employees you want to see that behavior. If it was a problem and KPIs work, they should have changed them to make them align more with the business needs.


> If you create a financial incentive for a behavior, you're telling your employees you want to see that behavior.

And it's not just individual people modifying their behavior to game things, but more of an evolutionary process that the people who tend towards gaming things will get better bonuses, better reviews, better CVs, hence better next jobs, in perpetuity.

"Don't game it" can't be a simple solution. If there are people who game things, and they accumulate rewards for it, you will just see them out compete the nongamers. You are basically telling people to sacrifice their and their family's financial position for the sake of the corporation.

Everything can and will be gamed. But at the same time, if many game it, managers will really want to see evidence of non-gaming, so employees are also incentivized to signal their honesty, though this must be given some kind of channel too. If managers myopically focus on gameable metrics, that's their loss. You must always leave a door open for receiving honesty signals through side channels of some kind. These must always be somewhat amorphous, because the moment you define them, they become gameable.

In other words, no recipe, stay alert, try to cooperate but don't be a pushover, etc.


>If you create a financial incentive for a behavior, you're telling your employees you want to see that behavior.

What a cop-out. Everyone should want the company to succeed. So why not say "Hey, boss, this KPI won't work, here's what might happen..." instead of literally working less than needed to prove some point? It's dishonest.


> Everyone should want the company to succeed

The company has no interest in my success, why should I care if the company succeeds? It has no effect on me if it does.

"That's my only real motivation is not to be hassled, that and the fear of losing my job. But you know, Bob, that will only make someone work just hard enough not to get fired"

- Office Space


Keep in mind, the context of this discussion is that someone who works really hard and completes 90% of their KPIs ahead of schedule will now be financially penalized by the CEO for reporting any additional progress.


If you want employees to prioritize the company's success over a metric, you should make your interests and theirs align. Employees may fear pushing back against bad metrics because it might make them look bad, saying they can't meet them or that they think they know better than their boss.


You could turn this around and ask why, iteration after iteration, the company didn't understand that the KPIs were not ideal and employees were not happy with them.

Put yourself in the same boat: if you had a team to manage, with the freedom to give them incentives, and those incentives didn't have the effect you expected, you should be able to notice and try something else.

That's great if the team is helpful and tells you exactly what's wrong, but you should also be able to evaluate how it's going and react accordingly, otherwise you're just asleep at the wheel.


I feel like that's where most people probably start, and then get some response like "we can't change the KPI's because blah blah" or get the runaround. Once you have established that you have no agency to change the system is when you are sort of incentivized to let it fail in hopes that a better system replaces it one day. You also don't wanna piss too many people off that may be invested in the current system because you want those references for your next job.


When corporations have a duty to their employees and not to the shareholders, then that is when the employees will start to care about the company. As it is, you are asking for a one-way-street. Companies will hire and lay people off as economic tides turn, but then you say that the workers owe loyalty? I don't think so.


> I feel like you're stating this as "Look how silly KPIs are!", but your CEO is correct. The issue isn't the KPIs; it's the team members who are willing to intentionally game a system in conflict with overarching company goals.

The real issue is - KPIs are used to incentivize or PIP employees. Why are you setting the employees in a game that is in conflict with your company's overarching goals?


Someone gamed a system created to be gamed? :pikachu_face:


The real failure is the CEO didn't make a system where by gaming it you made things better for the company. This is a hard problem - but CEOs are paid big bucks to solve hard problems like this.


There's basically always some kind of monkey's paw shortcut to any formalized metric passed top-down.

The converse is that there are also always novel ways to differentiate yourself bottom-up from the mere shortcutters. This is basically how fads form. Something novel appears that is first a great signal for honest quality work, then it gets formalized and imitated and it becomes less predictive, so those who are in the business of creating true value must come up with new ways to demonstrate this.


There's a difference between a shortcut that has the interest of the firm in mind and sabotage. I'm always surprised how little sabotage is recognized / repressed in corporations. Certainly there is the issue of distinguishing the one from the other.

But still, sabotage seems to be frequent, as part of "empire building" or of gaming the compensation metrics. And "boys will be boys" is not a response that steers toward the desired behavior.


This is why you should switch up criteria you give bonuses by on a bi-annual basis. People who game your system only get 6 months of time to enjoy it. You’re left with people who can game every metric or people who are just so good they look good no matter what metric you pick. The first group is hard to deal with, but they would be hard to deal with anyways.


Or you could keep the metrics secret before evaluation. Except secrets are hard to keep and non-transparent.

One also has to consider whether the people who define the KPIs are themselves properly incentivized. How are they evaluated? Could there be some perverse incentive to set gameable KPIs that get hit easier? Maybe the KPI setter is in some sense in on the game and complicit? Very complicated stuff. What if the business gets destroyed in the process? Who does that hurt in reality and how much?


The CEO doesn't have a grip on the employees' motivation, couldn't set convincing evaluation standards, and couldn't come up with a system that doesn't conflict with company goals.

I'm not sure where you see the correct part.


Goodhart's Law says this will never work. That is the problem.

Upton Sinclair — 'It is difficult to get a man to understand something, when his salary depends on his not understanding it.'


Did we work for the same company? Same deal - 3 OKRs/quarter, with bonus tied to company completion of OKRs. People would just claim OKR success a few days before bonus payout with no justification, and the bonus would get paid out anyway. No post-mortem reflection, no introspection on whether or not a good incentive structure was in place -- simple: is the OKR box checked? Here's 20k!


If it was an ad tech company that specializes in video then yes


Professional managers, particularly American MBA types is what destroys businesses. Not KPIs, Agile, Scrum or whatever the fad of the moment to blame may be.

Management is a service task; it does not produce value itself. Good management is important to prioritize, allocate resources, and resolve conflicts. But that is not how MBAs see themselves; they see themselves as the most important thing around, since they're the ones in charge after all. And damn physics and reality, the spreadsheets say you can get a baby out in a month if you use 9 women to do it. Soon management captures all the attention and sucks away all the money, and shortly thereafter you end up with a moribund shell of the former company. You can tell how close to moribund a company is by looking at the manager-to-worker ratio. Get out of there once it climbs much above 1 to 10.

There are good managers around, but they're almost as rare as unicorns.


I've worked at many top big tech firms and encountered very few "American MBAs"; at this point they feel like mythical unicorns. A lot of, if not most, tech companies are run by ex-engineer H1Bs.


Most managers I encountered would sooner or later get an MBA, particularly as they made it further up the corporate ladder. Yes, it makes sense to get the specialized training that management needs, but an MBA is almost never that training.

And re ex-engineer H1Bs: just because you used to be an individual contributor before becoming management does not imply you retain your individual contributor skills nor that you'll be a good manager.

Most people across my career that became managers did so because the pay was better, workload was lower and promotion ladder more straightforward. Worst managers were usually the type that expected and demanded to be management after X years as an individual contributor; this is particularly true of people from some countries, where it is culturally expected that you'll be a manager within at most 10 years of starting.

Most companies do treat moving to the management ladder as a promotion, which creates all sorts of perverse incentives for many to become managers. Reminds me of a blog post from a few years back:

https://charity.wtf/2020/09/06/if-management-isnt-a-promotio...


I've said this for a while: the solution to the management hell that we currently have is to bring back secretarial roles. The introduction of the personal computer into the workplace basically killed off the secretary and spread ~50% of that work over everybody else, especially management. So we bloated management to cover it. The problem is that when your secretarial work is being done by the people with the power to fire the ICs, you end up putting excess value on the secretarial and organizational work.

So instead, bring back the secretaries. A separate org vertical (i.e., they don't have hiring/firing power over ICs), which handles the secretarial half of your current managers' jobs, and isn't paid more than your engineers. The engineering decision-making half of the job (along with hire/fire power) should be handed over to one or two senior engineers, who are now the actually powerful ones in the engineering branch of your org chart. This keeps your technically talented people working in technical roles. Basically, instead of

       C-suite             C-Suite 
       /      \              /  \
      M        M            EM   SM
     / \      / \          / \  / \
    M   M    M   M        E   ES   S
   / \ / \  / \ / \
  E  EE   EE   EE  E


This is a good direction. To boot, I think ES is the best way to remove administrative work from EM's and IC's plates.

Further, I also think that EMs need to then become more technical, aka a real "leader" of the services they are responsible for. So hiring, firing, perf, and BS meetings become only 5% of their job. 95% of their job is to ensure that services keep running and projects get delivered, like a true leader. The ES can help them and ICs with administrative tasks like HR follow-ups, OKR tracking, meeting scribing, etc.

I can't believe how much of this work is being done by low-level EMs and ICs in tech companies. It is almost exploitative, in a way.


What tasks should this secretary handle in a software company?


They are relatively rare in big tech, and the culture of being nominally “anti MBA” is pretty aggressive in big tech, but there are a lot of them still there. I think there is some bias to not notice them because usually middle and upper management at big tech are the movers and shakers of 20 years ago during dotcom and such when the companies were growing rapidly and SV was a lot less corporate. And if you’re a peon, you probably don’t interact with management much beyond your director, which IMO tends to be a soft ceiling for advancement for technical non-mba types these days.

It’s pretty common ATM for product managers to not have direct programming or software backgrounds and instead have a career like “MBB consultant -> MBA -> product manager”. Big tech companies also insource a lot of “strategy consulting” which are MBA types. And marketing/biz dev have plenty of MBAs too.

While someone like a “director of engineering” probably doesn’t have an MBA, there’s a good chance their boss does (especially since these days VPs seem to mostly be former product managers rather than former software engineers).

By the way, Tim Cook, Satya Nadella, Sundar Pichai, Andrew Jassy, and Sheryl Sandberg all have American MBAs.


I wonder how many of those went to get an MBA. That's the point of this master's: giving management knowledge to people with a non-management degree. Like engineering + MBA -> promotion. I've worked for companies that pay for MBAs for employees who are on track to management.


You have described my company's progression to a T. As soon as management empires got built, we lost focus and the entire job was just playing management games. Managers wanted better perf ratings than their peer managers, so they ran their engineers into the ground.

Scoping was whatever manager decided the right scope and timeline was.

It's almost embarrassing to see it in action. I am embarrassed that I am being asked to toil and grind for such stupidity.


I would be interested in everyone's opinion on that 1 to 10 ratio ballpark number. What has your experience been like?


Goodhart's Law: “when a measure becomes a target, it ceases to be a good measure.”

https://en.m.wikipedia.org/wiki/Goodhart%27s_law


We have this 5,800-line shell script that does our testing. The framework records the exit code. They started logging the exit code to a dashboard.

So now line 2 is exit 0.
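For anyone who hasn't seen this in the wild, here's a minimal reconstruction (hypothetical, just illustrating the comment): the framework only records the exit code, so an early `exit 0` keeps the dashboard green while zero tests actually run.

```python
# Hypothetical reconstruction of the gamed test "suite" whose line 2 is `exit 0`.
import os
import subprocess
import tempfile

script = """#!/bin/bash
exit 0
echo "5,800 lines of real tests live down here, now unreachable"
"""

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(script)
    path = f.name

# The framework would only look at this return code.
result = subprocess.run(["bash", path], capture_output=True, text=True)
os.unlink(path)

print(result.returncode)    # 0: the suite "passes"
print(repr(result.stdout))  # '': nothing after line 2 ever executes
```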


So you're not testing anymore, because a failed test would show up on someone's dashboard?


This has to be a joke. One way or the other.


I would hope so, but I can see it happening in an environment where you're penalized for "incidents". Take the AWS status dashboard: it almost never shows any downtime, and based on comments on HN, that's because admitting downtime would reflect negatively on the team in charge of the failing service.


The anxiety in your comment is almost palpable. Let me heighten it a bit. Some time ago, there was a post about a codebase that met its minimum code-coverage requirement by padding in useless (but covered) statements every time the code grew. I also really hope that's not true. http://reddit.com/r/ProgrammerHumor/comments/11dou0s/found_t...


TIHI. I refuse to believe it for my sanity's sake.


I did consulting for a decade at dozens of megacorps. Oh, the things one sees!


Increase your testing speed by 50% by replacing your `#!/bin/bash` (or whatever) with `#!/bin/true` and reducing your script to 1 line.


That would stop the GP from logging the errors as "warnings" and handling them without alerting his boss.

And yeah, I can totally see somebody doing it in practice. I have seen people do lots of similar things.


#!/bin/yes works too, if you want to kill the KPI script


you can go even faster by piping to /dev/null!

It's very performant.


Is piping to /dev/null webscale?


Does it support sharding?


It is literally impossible to support sharding better than /dev/null does.


We just need an elastic load balancer, a reverse proxy, a couple /dev/null sinks, observability & metrics, backup, green-blue deployments, auditability, staging and preprod environments, and a month or so of SWE time.

That will be well over $20k from the corporate salary budget, thank you!



Nice traditional setup. No Microservice Workflow Orchestration. (Yes that is a thing)


If this is true as you describe it, and has been true for more than a few days, then I am reasonably sure the organization you work for has at least one, but probably two or more, problems that are bigger than both Goodhart's law and exit 0 on line 2.


I used the present tense but this was actually quite a long time ago on another team at another company. Please consider it anecdotal.

As a kid, I worked at a large popular fast food chain, also measured by error metrics. The trainer was showing me how to fry a frozen chicken sandwich when the meat spilled onto the floor by accident. He said, "Here's your first lesson: no one saw that!" Into the fryer it went, metrics perfect.

It's all the same bad faith. Managers put odometers on every interface, workers spend their time developing odometer foulers. I could never own a business I wasn't the sole worker at.


You have a … wat?


Also related, the perverse incentive and the cobra effect.

https://en.m.wikipedia.org/wiki/Perverse_incentive


Both are almost unavoidable when your management layers grow beyond a certain level.

Most of the complexity, inefficiency and the related failures in today's corporate and institutional world is driven by too much management. The map has become the territory.


It sounds like a profound wisdom and on a very surface level it does make sense, but think of this: if you can assess that a measure is a bad one, this means that you have your own intrinsic preferences, otherwise you wouldn't be able to tell that!

Therefore, if you are unhappy with a measure, it means solely that it doesn't capture all of your preferences properly. Which is a technical problem rather than a philosophical one.


What you're saying sounds like profound next-level wisdom, and on a very surface level it does make sense, but think of this: an org can create a measure that captures the relevant preferences properly, and everyone is quite happy with the results because the team continues or slightly modifies what they're doing and lifts up the product to lift up the measurement, and life is good.

And then over time they start realizing they don't have to lift up the whole product, but just a small piece, to increase the measurement. So they do that, and the measurement goes up, but the product doesn't get better, because they've found the path of least resistance to raising the measure. This is really the underlying crux of Goodhart's Law. So it was a good measure, probably, until it became a target.

So what is a manager to do? "Capture all [the] preferences properly," as you put it? Probably not, because that quickly devolves from measurements into long-form status reports, which aren't even measurements. It's impossible to capture all dimensions of this with measurements, so one has to reduce the dimensionality a little.

This is a philosophical problem not a technical one. Though your point does seem superficially correct, in practice with real teams the second and third order effects from the measure becoming a target dominate.


The aphorism is pointing out that if something is made a target, people will game the target, even if that has very bad effects on the company or product.

So even a good measurement is vulnerable to this problem.


It ceases to be a good measure: it captured your preferences properly prior to becoming a target.


> It sounds like a profound wisdom and on a very surface level it does make sense, but think of this: if you can assess that a measure is a bad one, this means that you have your own intrinsic preferences, otherwise you wouldn't be able to tell that!

That assumes that the ones working towards a specific KPI are both the ones noticing the metric is bad and that they are noticing it before the negative consequences have destroyed their product/company.

Often when the issue of optimizing to a KPI comes up, it seems that exterior people notice the problem but interior ones don't, or else the issue is noticed after the fact, but not before a meaningful change could be effected.


> Therefore, if you are unhappy with a measure, it means solely that it doesn't capture all of your preferences properly. Which is a technical problem rather than a philosophical one.

Here, you're assuming that the space of possibilities is a totally ordered set. But it's probably not, so your "technical problem" is in fact a mathematical impossibility.

With KPIs, you push people to maximize a very-straightforward-but-fundamentally-flawed metric instead of relying on their own judgment. And by not trusting your employees, you end up having them behave like brainless bots.


good == honest in this context


You can't really align (hahahaha, hahahahaaaaaaa, align, anyway) stated goals of a company under capitalism with actual goals, because there are multiple competing games being played at various levels and when you actually pick a level and make their goals explicit, an actual class war would quickly ensue.


Show me the incentives and I'll show you the results.

It's my favorite Munger quote, and it's a really good one. So often businesses invest so much time, effort, and money into everything but incentives. No matter how much you spend on culture, perks, diversity, swag etc, if the incentives are not aligned to some over-arching goal internally, you will not get the results you're looking for.

Let me ground this in a concrete example. If your KPI is a certain number of calls per day from a sales person, here's how the most Machiavellian caller will hit it:

1. Spend little time with good prospects (they're paid by call, not by conversion!)

2. Call old numbers/disconnected/rapid disconnects

3. Call multiple numbers at once, reduce call quality etc.

For any other metric you pick (conversion rate, call time, etc) there are similar schemes that have undesirable side effects. Either you very carefully build a composite framework of incentives to use, or you take the index fund approach and compensate on overall company performance (like a stock bonus, profit bonuses, etc). Personally, I think the best option is stock bonuses, but you may disagree.

Either way, getting those incentives right is absolutely key to performance.


Stock bonus means:

1. I have less control over my bonus

2. My bonus depends on the performance of others

3. I can just do the bare minimum and still get a good chunk due to others


Sure, but it has plenty of other desirable outcomes (some of which are more for your employer than you):

1. There is a risk that the stock loses value but stocks generally have positive EV.

2. Since stocks vest over time there is some intrinsic value in being able to simply walk away if the stock crashes (dropping your comp going forward) but being able to stay if the stock surged (greatly increasing your comp compared to your market rate). It’s kind of like having a long-dated option.

3. It generally takes months to a couple years for stocks to change in value enough for it to put you over market rate. It takes similar amounts of time for you to fully ramp up and start adding value to the company. Generally you tend to get into golden handcuff range right around the time you start settling in. Getting paid above-market in a role where you’re doing well is great.

4. You may not have enough influence to really move the stock up but you often do have enough influence to move it down (like being sloppy or doing something destructive like leaking).

5. From the employer perspective, aligning your financial interests with the board/founders/investors/execs probably means you’re unlikely to try to unionize or get angry at the company when it gives you only a 5% salary bump in a year where the stock increased 100%. Generally speaking the worker:exec relationship is less combative since there’s less of a conflict of interests. As an employee there are nice aspects of this, like the whole company having a more collegiate or team-like atmosphere.


Correct, certainly not perfect. I actually view 1,2 as features, and 3 as preferable to some of the active harm that comes with other incentives. You could reject that and optimize on something else.

The goal is really to understand the impacts of whatever incentives you choose. I don't think there is a side-effect free incentive.


Clickbaity title submission. The actual title of the blog post is "How to avoid KPI psychosis in your organization?" and it talks about why blindly following KPIs can be bad, that humans are good at hacking numbers, etc., and basically says "don't use them blindly". It's also framed in the context of tech businesses, not all businesses in general. So I'm not sure how to jump from that post to the submitted title of "Why KPIs are destroying businesses".

KPIs can be very useful as sanity checks, and it also varies quite a bit by industry if and when it makes sense to lean more on them. Anything involving physical goods/supply chains usually benefits greatly from having good KPIs in place. Same for financial information. The "I" stands for indicator; I think it's usually a good idea to set up some KPIs with boundaries and alerts to re-steer the ship if anything goes badly off the path (cash flow indicators for finance, for example).


Basically McNamara fallacy but more subject specific:

https://en.wikipedia.org/wiki/McNamara_fallacy

The KPI is a measurement tool, not a goal.

If you run a race, your goal should not be to beat Usain Bolt's record; your goal should be winning with your best possible time, which could or could not be much better than Usain's. It is your coach, drawing on their expertise, who is supposed to measure you, observe how you run and train, and use Usain's record as a measuring stick as they help you adjust and adapt to break the record.


This.

Step 1: measure only that which can be easily measured.
Step 2: ignore what can't be easily measured.
Step 3: assume that which cannot be easily measured is unimportant.
Step 4: assume that which cannot be easily measured is not even real.

Plus yeah, also Goodhart's Law.


Cool. I'd not seen that one before. I was going to link to Goodhart's Law, but the McNamara fallacy seems broader and more interesting. Thanks.


KPIs are such an incredible waste of time. At my last job our team was tasked with coming up with 5 KPIs; that alone took many hours of discussion (times 6+ people), and then to implement the KPI reporting we would have had to start logging/recording a ton more data (manually, in JIRA). On top of that we had to write code to pull that data from JIRA so it could spit out the KPI number.

After all of this I once again pointed out how useless and stupid the whole process was and how easy to game it was. We could easily lie in our scripts if we wanted to, it wasn't like management was capable of checking, and it was going to encourage reclassifying/hiding certain bugs (hard issues) and pretending other things were bugs (small changes/minor feature requests) to juice the numbers.

The whole process took a _ton_ of time, and in the end I never heard another department's KPI numbers (we were told every dept was doing it and we would all report monthly for the whole company to see). I'm fairly certain it fizzled out, though I left shortly after that.


>We could easily lie in our scripts if we wanted to, it wasn't like management was capable of checking, and it was going to encourage reclassifying/hiding certain bugs (hard issues) and pretending other things were bugs (small changes/minor feature requests) to juice the numbers.

But why would you do this???

Holy cow, some of these comments. As someone fixing bugs, why would you not be interested in the classification, status and tracking of bug-related data? Why would your first instinct be to lie about what's happening with the business? To what end?

Honestly, this thread has me wondering if the average worker does so little that they see KPIs as a way of being accountable, and don't want that in any manner.


We have this at our job too.

We do pentesting to prevent us from having vulnerabilities and maybe even being hacked.

But then the new manager wanted a KPI. In their infinite wisdom the management people decided that "Cost saved by preventing a hack, in Euros per product" was going to be the KPI.

So now a bunch of pentesters have to try and estimate what the impact of a potential, never happening (because we found it) vulnerability would have been on our company, if exploited by a malicious actor.

After a while of guesstimating this, they started giving us flak for the number going down. We tried explaining over and over again that this makes no sense as a KPI. We're just guessing, essentially. But no, they want the KPI and they shall get the KPI.

So now we just guesstimate the potential cost a little higher every month, but not too much.


I cannot understand why there are not 3 or 4 categories of incident severity (user data exposed, etc.; I think there are one or more standards for this) with management mapping each of those to a quantity in euros. Then everyone would get what they want, since it is just an estimate.


There's a standard for this called CVSS. We pointed it out, but it gives a score on a 0-10 scale, and they really wanted a € KPI.

Oh well.
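For what it's worth, the mapping the parent suggested is only a few lines of code. A sketch (the CVSS bands and euro amounts here are entirely invented):

```python
# Hypothetical CVSS-score -> euro mapping, so the "€ saved" KPI becomes a
# mechanical lookup instead of a monthly guessing game. Amounts are made up.
def euros_for_cvss(score: float) -> int:
    bands = [
        (9.0, 500_000),  # critical: user data exposed, etc.
        (7.0, 100_000),  # high
        (4.0, 20_000),   # medium
        (0.0, 1_000),    # low / informational
    ]
    for threshold, euros in bands:
        if score >= threshold:
            return euros
    raise ValueError("CVSS base scores range from 0.0 to 10.0")

print(euros_for_cvss(9.8))  # -> 500000
```

Management gets its euros, pentesters keep scoring with the standard they already use, and nobody has to inflate guesses month over month.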


> Why would your first instinct be to lie about what's happening with the business? To what end?

If your compensation or continued employment is tied to metrics, especially metrics that aren't inherently valuable, then there's much more incentive to game/fudge them than to do the work to actually resolve them.


The article touches on this somewhat. If you tie any sort of monetary benefit to KPI results, you've created an incentive to either game the system or lie. If you lose the context of the work by trying to simplify metrics and then base people's worth on those metrics, people aren't going to care as much about the quality of their work.


Humans are lazy. Most take the path of least resistance. You're incentivized to hit a goal - if you can hit that goal with minimal effort by obfuscating or exaggerating to naive management, why wouldn't you? You're just an IC, a worker bee, you probably don't have equity or any real ties to the company's overall success (which isn't an engineer's job to define in the first place).


Grug find bug, Grug beaten with club. Grug no find bug, Grug no beaten with club.

Grug work hard for own KPI: amount of times not beaten with club.


I think OP is saying that he is gaming a stupid system to make it a bit less stupid and unfair, not that he wants to cheat or not fix issues. Classification of issues is indeed important, but if management is not playing a fair game, then neither am I, while still doing my job, tracking and fixing bugs efficiently.


It sounds like you have a dysfunctional organisation as much as a KPI problem. KPIs and everything around them have been studied to death over more than the last 40 years, so there should be few surprises about when they are useful or not. People and politics are the constant, so we never get away from organisations like the one you describe.


Here's a good Q&A as well as links to some good reads there: https://stackoverflow.com/questions/1168131/how-to-measure-s...

KPIs (as targets) are undoubtedly good for work that directly benefits the company (with the caveat of long vs. short time horizons) and where the task is static. That fit well 40 years ago, when factory work was the norm; nowadays those good KPIs have mostly been replaced by machine specifications.

What people here are complaining about is KPIs for knowledge tasks, usually software development, which are full of uncertainty and almost impossible to estimate (see https://en.m.wikipedia.org/wiki/Hofstadter's_law). If you can't estimate the work, how can you set targets? You can only measure it and hope that the measurements help with future estimation.


I think the first thing you need to do is get out of the programmer mindset. The problem you are solving may be closer to resolving the anxiety of leadership, or building confidence. In reality, the situation is complex, and our responses may undermine confidence and drive anxiety at different rates with different stakeholders. If you asked some folks from other fields to read the StackOverflow link you shared, they'd provide some insights that aren't fully explored.

I don't think you can get there with KPIs, but at the same time, you can't completely discard methods like these, which have some level of credibility with other stakeholders. As evidenced by many conversations on this topic involving programmers, KPIs get completely thrown out or, worse, undermined. Insufficient time and effort is given to actually making the leap to something that might be useful and sustainable. Hence, many places never break away from KPIs and the complaints keep coming.

Measurement without particular targets can be quite useful when the organization is open to sharing data without punitive responses. This gives you different axes to explore with regard to performance, so that you can shape your messaging around performance, provided you can contextualize it.

EDIT: My personal philosophy on how strategy should work and be implemented is closer to that of Mintzberg (https://en.wikipedia.org/wiki/Henry_Mintzberg) than Porter.


We're talking about different things here. Many others are talking about incentivized KPIs or KPIs that become targets. You treat KPIs as measurements, which is a legitimate thing to do.

IMO keep measuring, identify all the problems and fix/improve them, and reward based on team achievement rather than target, it'll boost morale overall.


I wish more companies would be able to work with KPIs as _one_ part of the equation, rather than the be all end all. I haven't seen a lot of that :(

In my experience, data-drivenness quickly leads to an unhealthy focus on short-term results, because that's usually what's most straightforward to measure and impact. What you get is teams hunting local maxima. What's missing is the guts to follow a vision/intuition regardless of what the numbers say, at least some of the time.

It seems hard to accept that we can't measure everything that impacts the long term success of a company, but I don't see why that typically translates to not caring about it altogether.


Managers like dashboards. KPIs are dashboards.

Your car dashboard is great for staying within speed limit, but now try driving without taking your eyes from the speed dial. :-)


Do managers actually like dashboards and KPIs? Not all managers report directly to the CEO, so they suffer from these problems as much as their underlings do.

Management wants indicators of certainty and progress with the least effort possible. Superficially, dashboards composed of KPIs give them what they want. Good CEOs are unlikely to rely on these, but instead think of them as an alignment and investigative tool.

Each layer of the organisation aligns itself around some KPIs, with lower levels often feeding upwards through the hierarchy. As you go further down the hierarchy, things get messier: teams are further from the executives, may have less experience or skill in using tools like KPIs the way the CEO wants, and there are many more teams to align across, with problems like duplication, etc.

The majority of engineers are at the bottom of the pyramid. Their management is coming from their own ranks with little support, so it's not a surprise there is misunderstanding and misuse. In programming, when the career ladder often involves stopping dev work and becoming a manager, it only makes things worse. When I see peers promoted into management, I always hope it's the ones that have capacity for leadership versus experience because they are going to be the ones that adapt KPIs to match the internal and external needs of the company.


Indeed. There's no substitute for vision.


great analogy!


It's even great in the sense that a speedometer tends to make people aim for "I need to go 65 mph" instead of "I should not go above 65 mph" which a speed limit is actually intended to mean. The speed limit became a target instead of being a limit.


Traffic flows more smoothly when everyone is going at about the same speed. To achieve this without communication between vehicles, you need some "obvious" speed for everyone to try to drive at. In many contexts the speed limit fits this need pretty well.

There are situations where it doesn't. These are mostly also situations where the speed limit doesn't work terribly well as a limit either. For instance, bad weather or winding roads may mean that driving at or near the limit is unsafe. In that case, sure, it's not good for everyone to coordinate on driving at the speed limit, but it's also not good for everyone to think of the speed limit as the right upper bound for driving to be safe.

So I think that in most situations either (1) treating the speed limit as a rough target as well as a limit is mostly a good idea, or (2) the speed you try to use as a limit should be something other than the nominal speed limit.


Vehicles can all synchronize speed independently based on a well-tuned control law while observing only the distance to the vehicle in front of them. So looking out the window is sufficient for that, if the drivers are not terrible.

If driving at the speed limit on a winding road is unsafe, the speed limit is probably too high for that road. But in practice, it's complex, winding roads that cause people to drive at reasonable speeds, rather than numbers posted on signs.
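A toy version of that control law (an optimal-velocity-style follower; every gain and distance here is an invented illustration value): each car observes only the gap to the car ahead and relaxes toward a gap-dependent desired speed, and speeds synchronize with no communication at all.

```python
# Toy car-following model: follower i sees only the gap to car i-1.
def desired_speed(gap_m, v_max=30.0, comfort_gap_m=40.0):
    # Crawl when the gap is tight, approach v_max as the gap opens up.
    return v_max * min(gap_m / comfort_gap_m, 1.0)

def step(positions, speeds, dt=0.5, k=0.5):
    new_speeds = []
    for i, v in enumerate(speeds):
        if i == 0:
            target = 30.0  # lead car just holds the limit
        else:
            target = desired_speed(positions[i - 1] - positions[i])
        new_speeds.append(v + k * (target - v) * dt)  # relax toward target
    positions = [p + v * dt for p, v in zip(positions, new_speeds)]
    return positions, new_speeds

# Three cars starting at very different speeds.
pos, vel = [200.0, 150.0, 100.0], [30.0, 20.0, 10.0]
for _ in range(200):  # ~100 simulated seconds
    pos, vel = step(pos, vel)
print([round(v, 2) for v in vel])  # -> [30.0, 30.0, 30.0]
```

The followers end up matching the leader's speed purely from local observation, which is the claim above in miniature.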


The speed limit being a target is how it is treated in driver education and tests in the UK.

You can fail a driving test if you spend too much time driving more than ~5 miles per hour below the speed limit in safe conditions.


This is largely because speed limits are too low. In safe conditions, most roadways are built in such a way that a modern vehicle can safely go 10-20mph over the speed limit without difficulty or increased risk. There are several reasons for this.

The point though is that timidity in a driver is actually more dangerous than exceeding the speed limit. Being consistently below the speed limit is a strong sign of timidity.


I think this is overstated.

My driving instructor was (rightly) very hot on the principle that you must always be able to stop within the distance you can see. Speed limits are not set with this in mind.

In the UK, for example, that means that on winding country roads you should often be doing a lot less than the 60mph speed limit.


I agree with your instructor. That said, modern cars can stop in /much/ shorter distances than previously, largely thanks to improvements in tire technology.

Nearly every production vehicle available today can brake from 60mph in less than 140ft, more performance oriented vehicles can do so in under 120ft. The standards for “safe stopping distance” used in the US triple this distance to account for slowed reaction time because drivers are not attentive.

Realistically less than doubling these figures is correct for attentive drivers. 60mph is 88 feet per second, and 1 second is plenty of reaction time for an attentive driver, meaning 200-240ft total stopping distance, closer to double the distance the vehicle is capable of rather than triple.

So if the visibility is less than 200-240ft, you should be going slower than 60mph. 240ft isn't very far in a car, though; it's roughly the distance traveled in 3 seconds at 60mph. There are certainly some country roads I'd slow down on, but if you're paying attention, you're good on most.

For what it’s worth, the average reaction time of a gamer is 150ms, so a full second is plenty of margin of error.


U.K. stopping distance is 240ft at 60mph, 120ft at 40mph, 75ft at 30.


> the average reaction time of a gamer is 150ms

It's rare that those gamers have a crying toddler in the backseat while being sleep deprived and the sun coming from a less than perfect angle.


Also not generally being surrounded on all sides by other unpredictable gamers whose own split-second reactions are going to determine what their safest move should be.


I don't think it's so much about the timidity of the driver; it's the effect driving slow has on the people behind you. Driving below the speed limit increases the amount of drivers who need to do overtaking, which are moments substantially more dangerous than just driving straight. If it weren't for forcing other people to overtake, I don't think we'd really care, even if it was a sign of timidity.

It's very common to see older people drive slow on the highway because going 100km/h seems scary to them. This is, of course, absurd: driving 80km/h on a 100km/h road creates far more opportunities for bad things to happen, because you're forcing everyone to merge into the passing lane, including trucks with bigger blind spots, rather than only the drivers choosing to go above the speed limit. If you do this you should absolutely be fined, and not because it's a good metric for timidity (which I agree can be bad in drivers generally), but because the action itself causes harm.


> Driving below the speed limit increases the amount of drivers who need to do overtaking,

Nobody "needs" to overtake or is "forced" into doing something dangerous outside safe margins. Seriously, this is blame shifting for when somebody does something stupid. When you do, it's 100% on you.

Going 100km/h on a highway where everybody else is doing 130km/h is rare. This is about a 65mph (roughly 100km/h) road where somebody goes 95km/h and everybody behind them goes berserk. That's an attitude that needs to change, not in the 95km/h driver, but in everybody behind. Going 100km/h is not a right, nobody forces you into stupid overtaking maneuvers, and the time you lose by going 95km/h is negligible. At least it's substantially less than you'd expect, since there are many other factors that slow you down on a journey. You rarely drive for 3 hours straight with no interruptions and no other traffic on the road. Even if you did, doing 100km/h instead of 95km/h would get you there 9 minutes earlier. That's the upper limit; you are likely going to save less than 5 minutes after 3 hours of driving.
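The "9 minutes over 3 hours" figure follows from a one-line calculation (assuming an uninterrupted 300 km stretch, as in the comment):

```python
distance_km = 300  # 3 hours at 100 km/h

def minutes_at(speed_kmh):
    """Minutes needed to cover the fixed distance at a given speed."""
    return distance_km / speed_kmh * 60

# Driving 95 km/h instead of 100 km/h over the whole 300 km costs ~9.5 minutes.
time_lost_min = minutes_at(95) - minutes_at(100)
```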


We (for some measure of we, but at least the US and most Commonwealth countries) have laws against this. It's called "Impeding the Flow of Traffic". As far as it matters, you are absolutely and directly wrong about your statement here. It's the 95km/h driver that needs to change.

You seem to have an agenda in your comments here, and in some of them I think you have a point, but in this one you are overplaying your hand.


50mph on a 65mph road may be "impeding the flow of traffic". 63mph certainly isn't.

I'm a bit confused by your insinuation of an "agenda" in my comments. I've been in enough situations where somebody behind me or others on the road endangered everybody just because the car in front of them wasn't going the $safe_margin_above that they were expecting of everybody. So they drive 3ft behind them and put pressure and stress on people. It's a dangerous mentality that can get people killed. The "slow person" (going 63mph) is not the danger in that scenario.


I personally find GPs agenda of promoting patience, common sense, and safety on the roads to be both dangerous and offensive. That kind of mentality is only a hop and skip away from becoming a deranged anti-car environmentalist.


> This is largely because speed limits are too low.

The research is clear: for every 5mph increase on a highway, you can calculate how many more people per year per mile will die.

If you want more people to die, then sure, speed limits are too low.


In the U.K. the motorway is by far the safest type of road. https://www.driving.co.uk/car-clinic/advice/are-motorways-sa... Etc

Could you point to your research?


> The point though is that timidity in a driver is actually more dangerous than exceeding the speed limit.

This is a bizarre claim, since most accidents/deaths would be avoided by cars being slower. Unless you have data on this, this is just "I like to drive faster" with more words.


Nearly every study on the topic has concluded that faster vehicles get into fewer accidents, but the accidents they do get into are more likely to be fatal. Conflating accident rate and fatality rate means you can’t see the forest for the trees.

At any rate, neither statistic has anything to do with my claim. Timidity is dangerous because it’s unpredictable and creates a rolling roadblock. The most dangerous thing while driving is /other drivers/. Being able to predict their behavior and avoiding having to overtake both greatly reduce risk. Timidity increases both of these risks significantly.


> Nearly every study on the topic has concluded that faster vehicles get into fewer accidents

Citations definitely needed. Not just for the studies that perhaps support your point, but for reviews providing a quantitative overview showing that the majority of studies conclude what you claim. Everything else is "I want to drive fast, slow grannies out of my way".


Fatal accidents are so much more impactful than non-fatal accidents that saying people should drive faster is absurd.


My car has a HUD which puts the current limit and the speed I’m doing on the windscreen.


The general approach is: performance, when measured, improves; performance, when measured and reported, improves exponentially.

KPIs might be defined incorrectly, but their use is simple and straightforward for people to understand. This gives rise to Goodhart's Law, "when a measure becomes a target, it ceases to be a good measure." The best KPI is something that can't be gamed by those whose performance is being measured. This is probably impossible in practice.


>data-drivenness quickly leads to an unhealthy focus on short term results

In my experience it's not the data, but finance and/or sales calling the shots. Their incentives are quarterly, so without good engineering pushback you get a lack of long-term vision.


The most vivid (and horrific) example of what relying too much on metrics can lead to is Robert McNamara's policies during the Vietnam War. [1]

Also, because defenders of KPI-driven management love to use the saying that "you can't manage what you can't measure", it's worth repeating the original quote where this is lifted from:

> "It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth." (W. Edwards Deming)

[1] https://en.m.wikipedia.org/wiki/McNamara_fallacy


That's the original quote? That's the opposite of "you can't manage what you can't measure."


I haven’t heard either quote but a quick search yielded this: https://deming.org/myth-if-you-cant-measure-it-you-cant-mana....


Exactly. Most people I witnessed saying this never even heard of Deming.


It's certainly not the origin of "you can't manage what you can't measure". For a start, Deming is almost a century too young for that.


Ah, that's interesting. I always thought it was a selective quote lifted from Deming. Can you tell where it originated? Taylor, or even before?


The counter-saying to that is "not everything that counts can be counted, and not everything that can be counted, counts."


It comes down to this: at some level, the right measurement is useful.

The problem is that the middle manager owes the VP 12 measurements, so the clever supervisor picks the easiest-to-achieve metrics possible.


"At some level" is the crucial point. But all too often metrics are a stand-in for the overall goal, and people forget their reductive nature, thereby allowing the system to optimize for the metrics instead of the underlying goal.

Ultimately, the "some level" you refer to is a very basic level; even more basic than what one would intuitively think.


Absolutely. My employer had an influx of GE refugees about 15 years ago, and they would pursue this nonsense with considerable zealotry.

They brought with them the church of Jack Welch, which was ironic as that culture pretty much chewed them up and spat them all out.


Yes, because the metric by which the manager measures employment is getting paid.

If the VP measures “goal accomplished” instead of “potential goal impact * % realized” then the VP will get easily accomplished worthless metrics.


Yup. One of my triumphs at a previous job was that we inserted “attempt” into a bunch of “do this” metrics.

The dashboard was 100% green - we tried to do x, y, z. The auditors were laughing when they verified our attempts. But the bonuses were paid and all was well.


I hate this stuff. I generally work on internal platforms within tech orgs, and we periodically get this shoved down our throats. Tech management wants us to come up with KPIs, despite not having any of their own for us to use for alignment. Our product guy who drives our roadmap doesn't want to put any work into KPIs either.

So instead some second-in-command on the team has to come up with some KPIs that they will have no control over addressing (they aren't team lead, they aren't product, they don't own the backlog or roadmap).

Inevitably there is a bunch of angst from management that the KPIs are too tech/engineering centric (well..) while again providing no actual good KPIs themselves, and then they all get missed anyway since no one is incentivized to address them.


Oh, it's even worse when senior leadership gives you KPIs. I remember when I was a fresh-faced college grad working at Dell Services.

One of my KPIs was about selling more headcount to the client.

None of the KPIs that came down from on high made any sense. I could be the best junior engineer in the company and it would have no impact on my performance review.


Irrelevant management KPIs at least tell you what management cares about.

"I'll know it when I see it" iterations of bringing different "no not that" KPIs to management is frustrating in the opposite direction.

Ultimately management wants to set the tone in both scenarios, at least when giving you dumb KPIs they have done some work and volunteered information.


The best framing for this is still "you get what you measure". If you have a KPI that says 20% growth in user base per year, you make it easy to sign up and hard to leave, but frequently forget to measure activity, user satisfaction, or profitability.

I worked for a company that wanted growth in business, so they started to sign contracts for jobs for which we had little to no qualification. We got new customers, more business, more work, but employees started to leave or burn out.


Along very similar lines…

There is a saying in BI that goes: “What you measure grows.”

So let’s say you measure MAU, the infamous vanity metric, and let’s say your MAU increases with each dollar you funnel toward advertising. The thing you’re actually growing there is your dollar spend, not your user base.

Thus it is your advertising spend that will grow, and not the size of your user base.


Your comment would be significantly more accessible to the general HN crowd if you explained what BI and MAU stand for.


Business Intelligence and Monthly Active Users


I worked at a giant megacorp as a low-rung salesman, and the major KPI was sales numbers. Until about June, when, if sales were looking bad region-wide, waste (returns) became the major KPI. If you knew how to straddle the line, by about May you could pivot to the KPI of the moment.

Every year some new hire would go out and sell 15% over sales plan and be praised as a superstar. The next year they'd be written up for some BS reason, and after a few years they'd be flushed out.

What did they do wrong? They outperformed sales plan in the first year by 15%, and that meant every subsequent year their plan would add 5% growth on top of that 15%. Since the initial 15% growth was unsustainable, the waste grew, and magically they were now failing at both major KPIs.


I've witnessed this far too often in sales. First, a team lands a giant deal far above plan and celebrates with Club status and a trip to a tropical island. The next year the algorithm decides they need to get that deal + 30% more out of the same customer. There is literally no new workload to sell, and the team underperforms plan despite a massive run rate from the original deal.


Do people still try to do KPIs? I haven't worked in an organization that has done that in ages. It's so well known not to work and to backfire that it's a giant red flag if an organization implements it.


From my limited experience they seem to have shifted to OKRs, which are pretty much the same thing with a different name, at least at the companies I've worked for. I hate them just as much.

Currently, two of my OKRs for this quarter are based on what I thought a project was going to be, and I have since discovered I was wrong, so they don't actually matter and I'll have limited opportunity to achieve them. Great, so now they get a fun data point they can point to in order to give me a smaller bonus next year.


What I've noticed is fashionable now is OKRs. I've never seen them work correctly, though.


Oh, it’s still filtering its way down through the big companies. Most startups eventually hire someone who had experience with metrics-driven management at a bigco and starts selling it as a panacea.


Agree with the red flag. I'd hesitate to join a place led by KPIs


I'll bang my old drum once again: there's nothing wrong with KPIs; the problem is with the organisation that introduces them. Companies that are doing well and succeeding rarely bring in KPIs, because they don't need them: they already have a system that works. It does sometimes happen when some ambitious young fellow decides he wants to be a manager or be known for IMPROVING PROCESSES! But mostly, KPIs are introduced because the business side of the organisation isn't happy with what the engineering side produces, so they start coming up with ways to measure engineering's failure. It's a manifestation of a struggle between two parts of the business.


McNamara fallacy:

> The McNamara fallacy (also known as the quantitative fallacy),[1] named for Robert McNamara, the US Secretary of Defense from 1961 to 1968, involves making a decision based solely on quantitative observations (or metrics) and ignoring all others. The reason given is often that these other observations cannot be proven.

> US Air Force Brigadier General Edward Lansdale reportedly told McNamara,[4] who was trying to develop a list of metrics to allow him to scientifically follow the progress of the war, that he was not considering the feelings of the common rural Vietnamese people. McNamara wrote it down on his list in pencil, then erased it and told Lansdale that he could not measure it, so it must not be important.

https://en.wikipedia.org/wiki/McNamara_fallacy


Ooh, good example! McNamara's Vietnam War-era career is, in general, a long-ass list of well-meaning, manufacturing-inspired, data-driven decisions and efficiency drives (after all, he was previously president of Ford) that horrifically backfired. I wonder whether there's a "Mythical Man-Month"-style book about that out there.


I would suggest that the problem is not KPIs (or Agile, or whatever else is ruining your particular company). It's shitty management. KPIs (and Agile, and $buzzword) can be useful, when done right. If you get hit in the face with a hammer, is the hammer the problem? Or the idiot who swung the hammer at you?

Time and time again I've seen managers come up with a Great Idea(tm). It works for a little while. And then it turns into a shitshow because it goes from being a useful idea to being dogma.


Before Agile management didn't have a hammer to swing at you in the first place. Micromanagement required actual work.

Agile has automated micromanagement for management ...


Hmm, you might have a point there. Still, it's just a hammer. It's blameless. I guess these management fads just make it easier for lousy managers to do damage?


Exactly. I would even say that hammer is too nice a word; I would call it a whip, or even a rope. Lousy management gives you a rope with which you can hang yourself. That way they don't have to bother getting their hands dirty.

For the life of me I couldn't understand why, when a new CEO came to our company 9 months ago and the first thing he said was "measure everything, every team should have public OKRs", everyone on my team jumped on that OKR/"metrics" train and was hyped to have it. I was the only one against it, but that didn't make any difference.

Two quarters later it was pretty obvious nobody gave a shit about those (we had much more important stuff to do), but our manager had to apologize to upper management quarterly for the "poor performance" of our team. And informally, nobody took any of those OKRs seriously; colleagues thought of them as a bad joke.

I have no idea why the same team members didn't voice their opinion immediately and shoot the idea down before it took root.

Now half of the company is getting laid off behind the thinly veiled excuse of an RTO (at a company that was remote-first since the beginning, long before the covid years) and everyone is a surprised pikachu. I'm not saying this is solely because of the OKRs; there were more red flags besides that one. I'm actually surprised that very few saw it coming: the CEO is trying to emulate Musk and do a Twitter speedrun, but with someone else's money, and blame the engineers who actually made the company as known as it is.


I’m reminded of the philosophy of science, I think. Math in general is very powerful and effective at modeling the phenomena of the universe. But it’s generally not enough to simply show math notation to demonstrate knowledge nor come up with scientific hypotheses. Proofs can’t be written solely in the language of the system. There’s got to be some base intuition to a hypothesis stemming from some prior experience of the phenomenon under consideration.

(Yes, much of the standard model and modern computational science muddies this point. That’s illustrative and worth contemplation. Obviously, such are only as good as their data and calculations agree with reality.)


In academia that is divided into quantitative vs. qualitative. While most of the earlier decades of my life were focused on the quantitative part, as I age I have begun to value the qualitative part more and more. Often the performance of companies, organizations, and individuals is not measurable in quantitative ways.


Leadership and management are crafts just like engineering. Goals and KPIs are tools. Most of the time when I've experienced bad Goals or KPIs, the root problem was one of the following.

1) Lack of a clear mission/direction at the top level of the organization. This is a 100% necessary condition for good goals & KPIs throughout the rest of the org. Otherwise it feels like political battles and favoritism between different people or departments with conflicting goals.

2) Goals & KPIs that are not well mapped from the org's mission or roadmap down to the individuals. Now all the individuals can hit their goals but org fails, or vice versa. This feels like "doing what's right for the company is bad for my performance review."

3) Confusing health KPIs with outcomes or goals. Something like commits per day/month is a great health check but a bad goal, and the benchmark for healthy is totally context-dependent. This feels like "she solves the hardest problems but the solutions are very few LOC", or "he has the most commits because his code is so buggy".


Metrics don't tell the full story and should only be a read, not the direction. Project management and customer needs or product needs should be the driver. We are at the point now where the KPIs have the wheel.

If they were just called metrics at most companies, and not Key Performance Indicators (KPIs), they would get less focus. KPI has been a heavily consultant-pushed term, but they are just metrics; they should never be seen as the way to create value and develop a product.

People love to say "KPI" as much as "Agile" and "velocity" nowadays; both terms have been horribly twisted and now ruin agility and value creation in favour of value extraction. They need to become generic again: metrics, agility, delivery, and usability.

When metrics are all you measure, you end up with problems [1]

[1] https://en.wikipedia.org/wiki/Performance_indicator#Problems


Having an economics degree, I figured this out from evolutionary dynamics theory: a more sophisticated creature will outplay a less sophisticated one (in this case, an employee with creativity against a manager that just uses KPIs).

Later I found out this rule was articulated in 20th-century economics as Goodhart's law: "a metric stops being a good metric as soon as it is used to make decisions".


While working at a big corporation we had a velocity initiative supposedly aimed to lead the company toward continuous integration.

"How long a PR stays open" was one of the KPIs on a dashboard.

I said: "Be careful with that!"

People started to close PRs and open new PRs with the same code.

Middle managers, and sometimes the division's point of contact for the velocity initiative, were asking people to do that.

The script measuring this KPI was improved to look at the branch and the diff of the code. The result? People changed the branch name and the EOL encoding in the new PR.

Learnings? B and C players with questionable ethics screw companies quite rapidly.

If you do not have an outstanding company culture, KPIs and aligning them with company values are futile.
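A toy sketch of why the "time a PR stays open" KPI rewards close-and-reopen (the PR records and field names here are entirely hypothetical, for illustration only):

```python
from datetime import datetime

# Hypothetical PR log: the second entry is the same change, closed and
# reopened as a "fresh" PR purely to reset the clock.
prs = [
    {"id": 1, "opened_at": datetime(2023, 8, 1), "closed_at": datetime(2023, 8, 8)},
    {"id": 2, "opened_at": datetime(2023, 8, 8), "closed_at": datetime(2023, 8, 9)},
]

def days_open(pr):
    """The naive KPI: wall-clock days between opening and closing a PR."""
    return (pr["closed_at"] - pr["opened_at"]).days

# The dashboard sees a 7-day PR and a 1-day PR, so "average days open"
# drops to 4 even though the change took 8 days of real elapsed time.
avg_days_open = sum(days_open(p) for p in prs) / len(prs)
```

Each refinement of the script (matching branches, diffing the code) just moves the goalposts for the next round of gaming, which is the point of the comment above.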


Why not make the KPI mirror what you want from the employee?

Here is a book that makes that quite clear. Adam Smith published it in 1776. This is not a new idea. https://www.gutenberg.org/ebooks/3300

I remember a recent story about Ziff Davis killing old web pages. If that is KPI psychosis, the fault lies with whoever set the KPI to average page hits rather than total page hits across their web footprint.


The problem is that what you really care about is the bottom line. For just-in-time manufacturing this isn't too hard, as you make it and sell it at about the same time. However, engineering often takes years of work before you can sell anything, and until you do you have no idea how successful you will be, except via proxy measures which might or might not line up with long-term success. Of course, there is a lot in between.


Managers lurve to have standard KPIs that they can apply to all engineers (for the purpose of fairness, of course) and then use to rank engineers.

At the same time, they forget that not everyone, and not every team, is working on the same thing. So the KPI (whatever it is defined to be) is not measuring anything of value.

But a manager will never admit that they are measuring the wrong things, so engineers are stuck with poor practices. Go improve your number of commits, even if it is meaningless.


One of the things Amazon measures for engineers is revisions to PRs. This makes sense at a very superficial level: if you're a better engineer, you make the PR right the first time.

Except what happens is that engineers either game it by just canceling PRs, making the requested change, and resubmitting, or they fight tooth and nail over everything. No, I won't change the naming of this variable, because then I might get put on a PIP. Or, even more insidiously, the team just doesn't do real code reviews; they rubber-stamp each other's work in a "you scratch my back, I'll scratch yours".


An Amazon engineer (like other FAANG engineers) spends an insane amount of time gaming an adversarial system.

If it were not for the anti-competitive moats of these companies, these management practices would have burned them to the ground. Startups that copied these practices without offering FAANG compensation have faded into history.


> able to rank engineers

It's mind-boggling that people who are specifically selected for intelligence are subsequently assumed to be complete idiots. This is why your business depends crucially on one bit of software that only one person actually understands.


If a company wants to rank engineers then there is always a metric to rank engineers. That metric may simply be how much the manager or director likes each engineer. That's still a metric and it still can be gamed (ie: classic kiss ass culture).


Any ranking system produces undesired, pathological behaviors in the company. Then, a large amount of management needs to be hired to contain the pathological behaviors created by management in the first place.

But hey, its not my company. If they want employees to spend time on BS, I'll play along as long as I keep getting my money. I don't care for the product, services or the company at that point.


Not having a ranking/promotion/PIP system also eventually creates pathological behaviors in the company, barring temporary situations of near-infinite revenue unrelated to what employees do.


A promotion system is important? Yes

But ranking and PIPs? I assure you, you are not incentivizing collaboration. You can try to force collaboration, but most employees will do just enough, because it is in their best interest if someone else fails.

If there were a system design interview where you had to design a system in which employees throw others under the bus - but subtly - what would you design?


>PIP

Have you ever talked to an engineer who was forced to work with someone on their team who was providing negative value? That doesn't drive collaboration; that drives low morale and, eventually, people moving on. If you need to identify low performers, then you now have a metric, explicit or implicit, that identifies them.


> Have you ever talked to an engineer who was forced to work with someone on their team who was providing negative value? That doesn't drive collaboration; that drives low morale and, eventually, people moving on. If you need to identify low performers, then you now have a metric, explicit or implicit, that identifies them.

Have you ever worked on a team where people are systematically yanked every 6 months? No one on such teams wants to collaborate. It is in each individual's best interest to let others sink. Product, services, and outcomes be damned.


The problem with capitalism is that it works too well: a company doing well can subsidise a lot of bad management practice and still do great. And because it is doing great, it believes what it is doing is great. But it is largely a snowball effect. I'm talking about larger companies more than startups here.


This is a difficult topic especially when it comes to public finance.

An example: I live in Czechia, and our universities are mostly publicly funded. In an ideal scenario, resources should be allocated to achieve a noble but vague goal: to have a population as well educated as possible, while also doing it efficiently in terms of money and time. But how should we divide the available money between different universities to achieve this goal?

One of the most important factors currently used is the number of students studying at each university. But this indicator leads to weird behavior: the universities start to favor quantity over quality. They might, for example, make the exams easier to keep weaker students around, or admit more students than the teachers can reasonably handle.

You could choose a different metric and solve this issue, but most likely some other damaging behavior would replace the previous one. At the same time, relying on intuition might not play well with the current push for governmental transparency. It would require an enlightened, incorruptible leader to make the decisions. And the public might demand objectivity.

It's a difficult problem.


I worked at a place where I thought OKRs and KPIs were a big problem, a much bigger problem than this article describes.

This was a startup that was struggling to find product-market fit. Things were generally disorganized and there wasn't a lot of discipline. I couldn't get the data scientists to use the same version of Python or use version control in a disciplined way. The engineering manager told me about a large number of practices that we allegedly used in our code, and also told me we did code reviews, but when I got involved with that code I found those practices were not being applied, and I can't imagine we were really doing code reviews or we would have known.

The most challenging thing I noticed was that we were doing a lot of zigging and zagging to keep up with our enterprise customers, who had very different projects. Some of that was intrinsic to what we were doing, but it also meant we'd spend a lot of time developing something and then abandon the work, which had us spinning our wheels. What we did need, in my mind, was the discipline to deal with that situation. When I am working on my own, things like that don't bother me much, because I take good notes and usually do a good job of restarting projects that are interrupted, but my team was not so good at it.

My impression was that the investors really liked the CEO and believed in his vision but didn't trust him 100% so they released the money in dribs and drabs which didn't really help.

We had some consultants tell us all to do OKRs, and we were all expected to make up 10-20 quantifiable goals. It struck me as (1) a personal distraction, (2) a game that a narcissist can play to make management think his glass is half full and yours is half empty, and (3) a distraction for management from "the goal" of finding product-market fit.


Newest KPI: number of KPIs per month (lower is better). Let's circle back to that discussion later in our OKR chat this Friday.


Everything about KPIs in this blog, Goodhart's law, the comments... it's all true. That said, I'm not sure KPIs themselves are the centre of this problem.

A national olympic team (or any competitive sports team) using KPIs probably would not experience these problems. They might get over-exuberant and over-optimise slightly; athletes might game the system slightly. But it's unlikely that a professional sports team degenerates to the levels of KPI affliction seen in an average corporation.

No matter the corruption, olympic teams are made up of people who really want to win medals: athletes, coaches, dieticians, managers, administrators, etc. They're not angels; they're just not as focused on appeasing their boss, managing up, and such.

Group size plays a role. C-level distortion fields play a role. The org's age, legibility, "market" feedback loops... it all adds up to an organisation that can take any management tool and make it pathological.

A commercial example of resilience to bullshit is the classic factory, that ideal "Firm" we based our economic models on. "The job" of manufacturing objects is legible; it can be effectively reduced to component parts, measured, and optimised. The feedback loop for any significant successes and failures is effective.

A young startup with a "let's make it work together and we'll all be rich" vibe can probably use KPIs effectively. There are lots of examples, and we intuitively know how this is going to go.

I think it's time to look at some microeconomic questions with the reverse of conventional logic. Why does the financial services sector have so many employees? Not because of the marginal profit value of the last employee hired, but because of the revenue. A company with $Xbn and a fat profit margin is (often) going to employ lots of people because it can. At that point, all you need is to lower efficiency to achieve the required output.


The sports team is an interesting case. Many fans (I know the details for MLB, being a fan myself, but the same is true of other sports) feel, essentially, that teams have optimized the wrong KPI: pressure to win has made them emphasize pitching to the point where there is very nearly a pitching change every inning in most games, while on offense they value home run hitters more than ever. The result is more competitive teams but less entertaining games: much of the drama of the game is close plays on fielded balls, and managers, scouts, players, and back-office analysts are doing whatever they can to avoid plays like that.

Yes, the team can make the playoffs, but ultimately baseball is an entertainment product -- the goal should be for the games to be fun to watch!


> Now your course of action will not be optimization of server-side processing, but something else (e.g. deploying in that region or on-premise for those customers).

And to empower your engineers to achieve that, you should

- Train them so they gain specific knowledge on whatever tech they are going to use. Most likely one of the big three cloud providers. Some companies skip this step because they don't want to invest the time and money required. Often with severe consequences when engineers deliver a botched solution.

- Make it a realistic goal e.g. you may need to create a new team with a smaller scope, so those people are efficient and not distracted by other stuff. If Mary from accounts is pinging your engineers about some legacy system that keeps failing, do not be shocked when "your estimates were wildly inaccurate".


I wish CEOs would stop flying long haul just so they stop having the time to read the management book du jour and inflict these silly things on the entire company.


It appears to me this is another case of the slow evolution of European businesses and management practices.

There exists a practice called OKR. Intel has been using it since the early 70s, Google since year 2, and pretty much everyone in SV has switched over since around the time that John Doerr book came out. It's designed to solve exactly the problems of KPI. You shouldn't spend much time figuring out what to measure, whether you are measuring the right thing, or how to put what you measured into context. You should be defining simple objectives and key results that will give you a binary answer at regular intervals. Practicing KPI alone is putting the cart before the horse. You should know your objectives and milestones before you figure out how to measure them.


This "just use OKRs" comment is funny when contrasted with another comment that described how OKRs were "gamed" at his company: https://news.ycombinator.com/item?id=37221787


It’s even funnier when it’s apparent that company isn’t really doing OKR. First of all, the Os are supposed to be somewhat resistant to change in direction. Second of all, you shouldn’t be doing KPI on top of OKR. If you set the OKRs right, you’ll set stretch goals so ambitious, even if you game the completion percentage, the simple answer to whether you are doing the right things or performing should still be immediately answerable.


  > set stretch goals so ambitious, even if you game the completion percentage, the simple answer to whether you are doing the right things or performing should still be immediately answerable.
any good examples of this in practice?


The comic in the article is the literal purpose of a KPI. The comic:

"John your commit frequency dropped 25%, are you quiet quitting?" "Sorry boss, I was on sick leave. Next time I'll be more active."

If we had the next box of the comic it would say:

"You're good John, you don't have to be more active, and I'm glad you are feeling better. If you need anything let me know."

The KPI is a leading indicator of something potentially going wrong (or right!) and a signal for management to look deeper. A good manager would connect with the human-being behind the metric, get the full picture, then help.

If management is treating a KPI as a source of truth and ignoring the complex people behind them, then it's a manager problem.

KPIs are not destroying business, bad managers are.


I really think their cartoon illustrated the point of the article quite well. For those who are visually impaired or didn't read the article, it was a two-panel comic saying:

---

Boss: John, your commit frequency dropped 25% last week. Are you quiet quitting?

John: Sorry boss, I was on sick leave. I'll be more active.

---

Before you cry "this is a contrived example!", look at what Amazon just did. They sent out an email threatening folks who weren't in the office 3 times a week in the previous two weeks. It did not take into account sick leave, vacation, FMLA, corporate travel, anything.

You take a vacation? You and your manager got a threatening email about your KPI you're not meeting.


I know multiple people who have worked at companies where KPIs were not adjusted for paid time off... 2 week vacation means your numbers are going to tank, which will reduce bonuses and opportunities for promotion.


I had a friend that worked at Deloitte, and that's exactly how they managed it. You had to bill so many hours a month, and if you took time off, you had to make it up when you got back (or before you left).

He was absolutely miserable there.


KPIs aren't the issue; it's how businesses use them. Think of it like flying a plane without gauges—it's risky and uncertain. The risk of stalling or getting lost increases. Good KPIs are like helpful plane instruments. Some may seem unnecessary, but the focus should be on the essential ones. KPIs are just measurements. The problem is businesses not being explicit in their direction and selecting the right measurements to reach those goals. You rely on measurements every day, don't stop now.


I think KPIs are (one of) the issue(s). They will always fail because of Goodhart's Law. Measurements are fine, but good luck finding any leadership that won't turn them into targets.


This really is just "Goodhart's Law" in action. We have the saying "all models are wrong", but I think we also need to make the phrase "all metrics are wrong" more common. Metrics are simply guides. It is also why we say that all rules are meant to be broken. If you rely too heavily on a metric you will only get yourself burned. No matter how good our algorithms are, they won't be any more than a guide. You still, unfortunately, have to use your brain.


Is time-to-last-byte really defined as the moment the initial response has been successfully received? Or is it defined more vaguely as "all responses needed to render the page have been received"?

I'd understand "time-to-first-byte" as something that could at least give you a reliable lower bound for how long users have to wait, but I don't see how "time the initial response has been received" is any kind of meaningful indicator of UX in modern javascript-driven websites.


Hard disagree. KPIs are invaluable for any modern tech business because they provide clarity on what the organization's goals are, and how to achieve them.

Let's say you have a goal to increase revenue by 50%. You come up with a plan, and assign different sub-goals to departments like Marketing, Sales, etc. Each person then gets a sub-goal down to the lowest levels of the org.

KPIs provide incredible org-wide alignment which no other management technique before, or after, them produces nearly as well.


I agree completely that discussing the size of KPIs without considering the company's strategy and its Balanced Scorecard (BSC) is meaningless. KPIs cannot be isolated metrics; they must be directly related to overall goals and strategic directions of the company. Without understanding what specific aspects of the business and tasks these indicators reflect, discussing their size becomes a futile activity.

Balanced Scorecards provide a holistic view of how different aspects of a company's activities relate to each other and to the overall strategy. They help identify causal relationships between different indicators and goals. It's important for leaders not only to define KPIs but also to clearly explain the logic behind their formation to KPI owners, and to demonstrate how these metrics fit into the context of the strategy and the BSC.

Unfortunately, a lack of information and understanding of this logic can lead to misinterpreting KPIs and, consequently, to misaligned behavior or a focus on irrelevant metrics. Therefore, it's crucial for leadership to actively educate KPI owners and help them see the bigger picture — how their work contributes to the achievement of the company's overall objectives.


One of the best articles on this subject: "Goodhart's Law Isn't as Useful as You Might Think" by Common Cog. Good overview of Goodhart's effects and the particularly effective practice of fighting it at Amazon, from the "Working Backwards" book.

https://commoncog.com/goodharts-law-not-useful/


I have a more pessimistic take on this KPI stuff.

You absolutely can get great outcomes without putting KPIs or metrics on a pedestal. And for sure, metrics should work for the employees; employees shouldn’t work for the metrics.

The problem is that this kind of falls apart in large organizations because, from the perspective of CEOs, execs, and upper management, leaving wiggle room requires either placing a high degree of trust in line teams or unscalable micromanagement. And you can argue that the high degree of trust should be given, but bluntly, unless your employees are very well coordinated and your management team is really on top of their game, a lot of teams will flounder without clear goals.

Now as an IC that sounds like bullshit, but let me ask you: have you ever seen a team that seems like they basically didn't do anything? What did that team have in common? Usually it's at least two of: 1. a lack of clear goals, 2. low to average levels of motivation and personal ownership, 3. low to average levels of skill. If you're on this website discussing software and KPIs in your free time, you're probably above average in motivation and skill. But I'm sure you know that without clear goals, people just working to put food on the table tend not to have the business interest and drive to create actually-valuable work.

Unless you can keep your business small enough that pockets of low productivity can't fly under the radar, or somehow hire only motivated, skilled people who stay motivated and skilled years after they're hired, you probably do need things like KPIs to hold teams accountable. Not your team necessarily, but if say 10-20% of teams will flounder without KPIs, it's probably just good policy to have them, even when it annoys the other 80% of teams, as long as you don't add pathological processes around the KPIs.


I like this quip, tho QI says it is due to William Bruce Cameron:

“Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.”

https://quoteinvestigator.com/2010/05/26/everything-counts-e...


I would add a root cause to this:

KPIs are fine when treated as health indicators. You can deliver a feature, and then watch over a good period of time whether the health improves. This "good period of time" can be months; or for something that's intuitive (e.g. launching a feature to reach parity, enter a new market, or catch up with a competitor) it can take YEARS for your customers to understand it and move closer to your product.

What backfires is, using KPIs as quarterly goals. Imagine this, you launch a feature and expect within the quarter that this feature is going to grab X% of your competitor market share (beautifully expressed as "increases X business by Y%"). This is where it starts to fall apart. Things that are human intuitive for the product (common sense market features, foundational work, new market launches) – will never get prioritized because they won't move a number in that quarter.

Using KPIs as development-cycle goals is destroying businesses. KPIs are fine as health indicators.

The problem is that using KPIs as health indicators works only in the kind of environment where teams, after healthy discussions, can trust and commit to delivering features, and somebody takes accountability for the KPIs asynchronously behind the scenes. Where a shepherd steps up and says, "I approved the features last year; I take responsibility if they didn't all change our trajectory. Some did, some didn't."

Today's adoption of KPIs is an "empowerment" trend, where managers figured out: teams are delivering features and I'm answering for the KPIs, so why do this roundabout? Let me just pass the entire KPIs on to the team, problem solved! "I forwarded the KPIs to my team and they committed to doing these features. I did my job; the team is not performing." The path to hell is paved with good intentions, but there are also nefarious habits.


“When a metric becomes a target it fails to be a good metric”

https://www.mrowe.co.za/blog/2019/11/when-a-metric-becomes-a...


KPIs are a tool and like every tool they can be misused by unskilled practitioners.

HR KPI — hire 20 engineers: you can hire 20 morons.

Sales KPI — close 100k of business: you sold the wrong use case and all of those customers churn

Eng KPI — reduce bug frequency: you make reporting bugs too onerous to do

Bad execution isn’t fixed by KPIs, it’s fixed by good execution. Good execution benefits from KPIs.


Leaving this not-so-humorous-post https://medium.com/@NTDF9/if-soccer-managers-did-performance...

Full disclosure, it is my own rant about performance management.


Seeing Like A State is the most important business book.


The forestry allegory is so relevant to what we go through in enterprise data. I recommend this book all the time to understand the motivations behind it.


Legibility even at the cost of the things you want!


I like the concept of KPI psychosis, but his solution for it is not helpful IMO. It's not actionable ("use intuition"). On the macro level it's about incentives and leadership (at the org level). If workers' incentives are based on (faulty) KPIs, there is going to be a perverse incentive even if members within the org know it is not optimal. On the micro level it's about gathering other forms of data, basically qualitative data, to inform the behavior observed in the KPIs or quantitative metrics. This could be accomplished with user research or by speaking to those who have direct interaction with users: sales and support people.


KPIs without judgment are bad because you can’t model everything. Judgment without measurement is also bad because we all have human bias. It’s a matter of where you lean first. (Do you check your model, or check your judgment?)


KPIs are very useful: if you find yourself working for an organization that begins to use them, they indicate that you should leave. If you interview at an organization that uses them, you know not to take a job there.


This is a corollary of the phenomenon I've dubbed "spreadsheet reality". I have noticed that ever since the advent of the spreadsheet, the numbers in the pivot table are more real to decision makers than what's going on outside their office door. It's just so easy to tweak a few numbers in a cell in VisiCalc or its descendants to get the results you're looking for on the bottom line. So what if that means destroying the lives of a few thousand (now former) employees?

The development of the spreadsheet was the beginning of the end.


People will find ways to game any system to maximize their score for what is being measured, particularly when there are attractive rewards for high scores.

This leads to perverse incentives where what the organization's leadership really wanted is set aside in favor of maximizing scores by whatever means necessary:

https://courses.cs.duke.edu/fall01/cps108/code/jpixmap/image...


KPIs can be used to measure and direct attention to problems or successes. KPIs can also be used to drive change. KPIs are rarely used for much more than vanity metrics. The article describes the situation nicely.

Like any system, KPIs can be tweaked and made to work for you, to accomplish your goals. The key is to start by asking why you want the metric.

Sometimes the answer “to appease management” is the right answer. Marketing does matter! More often though you can come up with a better goal for your metrics and it will make your team and company more successful.


The level of certainty you need to have that a measure captures all and only what is important varies with how much weight is given to it. When it becomes a key and decisive factor in evaluating individual and/or unit performance, tied to reward and punishment, the required certainty is near unity, because you are very strongly creating an incentive structure which will exhibit Goodhart's Law if it isn't perfectly aligned with your actual intent.

Things that make tolerable broad status indicators make horrible KPIs for that reason.


I think a KPI in itself is fine; it's basically the goal of the game you are playing, and defining it clearly should help. However, it never makes sense to have multiple KPIs; everything should be derived from a single KPI.

This actually gives the KPI some weight, because it literally defines what you do. Whenever you define your KPI, you'd think, "okay, is this all that I'm doing now?". And whenever you change your KPI, you'd be thinking, "okay, should I abandon the last game I played and play a new one?".


> KPIs should be used in combination with human intuition

And that's where everything goes from wibbly-wobbly to "I've fallen and I can't get up!"

What is "human intuition?" Do we all have it? Do we have the same "it" or is this just something we accuse other people of not having but that we can't quite define, like "common sense?"

This statement may as well be "ideally, we should use KPIs with ample hand-waving, for some unnamed value of 'ample.'"


Human intuition is based on raw data - photons, pressure waves and so on. Extracting meaning from this raw data is complex. Maybe, one day a machine will be able to look at various performance metrics (time to last byte, load times, commits per day, etc) and give us a concrete plan of action. However, we don’t have that today so should probably learn to trust the associative cortices since it is the best machine that we currently have.


These musings miss the key failing of KPIs, Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure".

There are multiple reasons why Goodhart's law tends to be true, but the most obvious one is that as soon as people understand X is important, they do X or inflate X. If they successfully do X, it's often at the cost of other equally or more important but untracked activities; if they inflate it, that's obviously just as bad.


I was hired at a pretty big startup because I came from a small startup where I had delivered close to 100% uptime.

I pretty quickly realized that their outages were due to their KPIs being based on jira and GitHub statistics only.

The hilarious thing was that after 6 months they started to complain about my GitHub stats. And every time I asked why they were not tracking downtime, since that is why they hired me in the first place, they basically said it was too hard to track.


KPIs/OKRs stop working once workers start optimizing for the KPIs, ignoring things like code quality or the mood in the team (is it supportive, or is it work-until-burnout?). It's especially bad when the company sets generalized goals while product development has different goals. For example, if we want to improve the loading time of the site overall, we could just completely ban the slowest clients (problem "solved", KPI improved).
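The "ban the slowest clients" trick is easy to see in a toy sketch. All numbers below are made up for illustration; the point is that a percentile KPI improves dramatically when you simply stop counting the users who have the problem:

```python
def p90(samples):
    """90th percentile via nearest-rank (no interpolation)."""
    ordered = sorted(samples)
    return ordered[max(0, int(len(ordered) * 0.9) - 1)]

# Hypothetical page-load times in milliseconds, including two slow clients.
load_times_ms = [120, 150, 180, 200, 240, 300, 450, 800, 2500, 4000]

honest = p90(load_times_ms)                           # everyone counted
gamed = p90([t for t in load_times_ms if t < 1000])   # slowest clients "banned"

print(honest, gamed)  # → 2500 450
```

Nothing about the site got faster; the metric improved because the slowest users were defined out of existence.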


This is why experienced managers should consider, “what’s a way we might superficially achieve this while yielding neutral or negative actual results?” and then set up more KPIs to detect that happening.

It’ll never be perfect, but it’s probably better than nothing. Company-destructive assholes and metric-gamers can run their playbooks with or without KPIs or OKRs.


Who says metric-gamers would be so obvious? "Add more KPIs": yeah, that actually only benefits the metric-gamers. The regular workers would just be more annoyed and distracted. Do we focus on the sprint and the tasks in it, or on 10 new KPIs?


Well if you want to just complete tasks then go ahead and do that. If you want your business to succeed it’s pretty important that the task-doing is producing positive results for the business overall (which is what KPIs imperfectly try to measure).

“Ha, you want a brake and a steering wheel? Those are annoying and distracting, do we push the gas pedal or not?”

Yeah it’s complicated but all these components are necessary.


Sadly I've been in many organizations that have a kind of "vibes for me, data for thee" approach to things. Meaning, "data drives decisions" is the official philosophy ... until it isn't.

It takes a certain amount of rigor and product maturity to make metric-based decisions work — and some types of applications are better suited to this than others.


The title HN chose is puzzling. It's not at all the title of the article, which is far less negative about KPIs: "How to avoid KPI psychosis in your organization?". And, in fact, the article talks about how to use KPIs with common sense, and never claims they are destroying businesses.

I wonder what's the reason for this clickbait title?


KPIs help PMs stay relevant easily by having them talk sht about KPIs that no one understands but them.


Wow, the new title is so much clearer than the business speak that was before it. Thank you, HN.


For the handful of dullards like me: KPI = Key Performance Indicator https://en.wikipedia.org/wiki/Performance_indicator


Business strategy -> objectives -> KPIs. Don't start at the wrong end.


Original Title: How to avoid KPI psychosis in your organization?


KPIs have always been there; they just had another name before. Businesses are still alive.


Mind you, the article never states that "KPIs are destroying businesses". That's just the clickbait title HN for some reason chose for TFA, not the actual title.


Another hard question: how do you measure people's performance, since KPIs are not the best tool?


Isn't it a bad KPI that's destroying the business? For the KLOC-committed and sick-leave example, isn't it supposed to be "KLOC per active day" that fixes the issue?
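The suggested normalization can be sketched directly. Function and field names here are hypothetical, and KLOC remains a dubious proxy for output even when normalized; the sketch only shows that dividing by active days stops sick leave from tanking the number:

```python
def kloc_per_active_day(kloc_committed, days_in_period, days_absent):
    """Normalize raw output by days actually worked."""
    active_days = days_in_period - days_absent
    if active_days <= 0:
        return None  # no data is not the same as a zero score
    return kloc_committed / active_days

# Two engineers with identical daily output; one was on sick leave half the month.
full_month = kloc_per_active_day(2.0, days_in_period=20, days_absent=0)   # 0.1
half_month = kloc_per_active_day(1.0, days_in_period=20, days_absent=10)  # 0.1
```

With the naive "KLOC per month" KPI, the second engineer looks half as productive; per active day, they are identical.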


KPIs are indicators, not measurements.


please don't write an entire article without ever saying what the initialism stands for:

Key Performance Indicators


This is a great article. I've consulted with many startups that are really scared of OKRs and KPIs and have some PTSD around them, but they need not be scary!

Here is a scenario:

It's January and Sales is getting feedback from customers that if the business has a managed k8s offering, they would not only buy it but expand their use of the platform generally. Sales passes this information to product, and product does some research, finding that indeed there would be a market for a managed k8s offering. They pull together a report and start to shop it around to get feedback and buy-in from around the company, consulting with different orgs. After a while, a business case is fleshed out and management green-lights an MVP k8s offering that will generate revenue by the end of the year. An objective and key result for the business becomes: this year, launch a managed k8s offering to a small number of customers with an eye toward $50,000 MRR.

Every org should then have some performance indicators that show the business is tracking towards the release of this product this year. It doesn't need to be complex, just as the OKR should not and does not need to be complex; it's simply an indicator that we're on track (and it's OK if you're not; maybe the OKR was unrealistic, and that's OK!)

Eng: KPI: stand up k8s - KPI: make k8s stable - KPI: make k8s more stable - KPI: tie k8s into the rest of the stack - etc

Product: KPI: Research how k8s should be exposed - KPI: Research how the k8s service should be billed - KPI: MVP product design and test internally - etc

Support: KPI: understand the staffing needed to support a new offering - KPI: training and hiring new staff to support the offering - etc

Marketing: KPI: understand the message to the market - KPI: understand how competitors are marketing their k8s offering - KPI: understand how to advertise new offering - etc

Etc - etc.

OKRs and KPIs do not have to be a whole song and dance, they are simply a way to keep a company of organizations aligned and moving in the same direction regarding agreed upon objectives for the business, folks get super hung up on granularity. Doing this well helps tie together some of the most important aspects of a healthy and successful go to market: Vision, Mission and Purpose. I understand the vision of the business, the things we will make true in the future. I understand the mission of my team and organization on how we will realize this vision. I understand my purpose within the vision and mission and why I am an important component.

I firmly believe it's only with this as part of your foundation that you can achieve employee satisfaction, have directionally forward teams, and become a "high performing business".


None of the presented KPIs are quantifiable, which misses the entire point of a KPI.

Also, it's unclear how the team KPIs tie into the overall Objective (Key Results are missing here). The way it's structured, every team could meet their KPIs and the company could still miss the objective. Eng can build a stable k8s, product could research all the things they want, support could understand everything, marketing could understand everything. If customers don't buy the product, the objective has been missed.

I don't say this to pick on you, but these examples are typical of what I've seen in practice when orgs roll this out. They end up not really grokking the idea behind OKRs, and slap the label on a process that is largely similar to what they had before but with a trendy new name. The net result isn't better business outcomes but a lot of toil, infighting about why OKRs were missed, gamed KPIs, vague and unhelpful "targets", and a distaste for the whole process.


Well, I've taken 3 businesses public (or sold them to a public company) this way and brought countless more to profitability, so who knows, I guess?


Look I'm not trying to make this an attack on you. My generous read is that you just chose some poor examples. But teams can and do succeed despite bad management techniques like poorly implemented OKRs all the time.

If the goal is to build a managed k8s service, a team of talented people can build it without the need for OKRs.

Coming from an SRE background, I see a better OKR breakdown on the Engineering side as follows:

* Objective: A stable managed service that the business can sell
* Key Result: A monthly availability of 99.99%
* Key Result: Monthly P90 API latency below 500ms
* Key Result: A new instance of the service serving customers in the new region by the end of the quarter

(Yes, these are really just SLOs, they're a good fit for this)

I know the objective rolls up to the business goal of a revenue target. I know whether I'm meeting my own goal by the metrics we can measure from the service. I know how to prioritize engineering work: if one target is being missed consistently, the fixes are going to be prioritized. At review time, I can point to improvement work that fixed/improved the results and know the business thinks it's important.
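Key results phrased as SLOs have the advantage that they can be computed mechanically from a request log. A minimal sketch, assuming a hypothetical log format with `status` and `latency_ms` fields:

```python
def availability(requests):
    """Fraction of requests answered without a server error (non-5xx)."""
    ok = sum(1 for r in requests if r["status"] < 500)
    return ok / len(requests)

def p90_latency_ms(requests):
    """90th-percentile latency via nearest-rank."""
    latencies = sorted(r["latency_ms"] for r in requests)
    return latencies[max(0, int(len(latencies) * 0.9) - 1)]

# A tiny made-up request log.
requests = [
    {"status": 200, "latency_ms": 120},
    {"status": 200, "latency_ms": 340},
    {"status": 200, "latency_ms": 95},
    {"status": 503, "latency_ms": 30},
    {"status": 200, "latency_ms": 480},
]

meets_availability = availability(requests) >= 0.9999   # KR: 99.99% availability
meets_latency = p90_latency_ms(requests) < 500          # KR: P90 below 500 ms
```

The point of measurable key results is exactly this: whether the quarter's goal was met becomes a computation, not a debate.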

Give me some vague objective and an unmeasurable task of researching and understanding something? I'll BS my way through filling out the form and go back to improving the service.


I think I've quit a few companies you consulted for. Your KPI examples are terribly vague.


Well, I was just giving an example off the top of my head, and I agree it's imperfect. However, I think the article is about company-wide KPIs for orgs; they don't need to be particularly granular. Team-level KPIs might be more granular, for example k8s stability might have some uptime number associated with it. But even then, being dogmatic with granular KPIs is exactly the point I agree with the article on: it will without fail destroy businesses.


> Marketing: KPI: understand the message to the market - KPI: understand how competitors are marketing their k8s offering - KPI: understand how to advertise new offering - etc

These are objectives, not performance indicators. Please do not redefine KPI to roughly have the same precision as "cloud" or "engagement".


This is a perfectly reasonable performance indicator for a marketing manager that ties up to a company wide OKR. If you can understand the message to the market in a prescribed time you're tracking towards an OKR.

Please don't redefine KPIs to match your experience at small businesses that will never scale to public ones. Your comment is exactly why businesses fail.


> Please don't redefine KPIs to trend with your experience at small business that will never scale to public ones. Your comment is exactly why business fail.

Here are some marketing KPIs:

- New customer acquisition

- Customer acquisition cost (CAC)

- Average deal size

You'll notice that "did some event" is not a KPI nor is "learned about the competition" - at least at most companies I've worked for, started or been employed by.

You've been a part of very good teams and held some important positions. So have a lot of people on HN, and it really undermines what you are trying to communicate when you throw around "you don't scale" or "you are small business". You should assume you are among peers here, and argue on the merits, not by throwing shade.


I still don't understand what KPIs are, and now I have no clue what OKRs are. Thanks a lot.

Why do so many people think that acronyms used in their bubble will be understood more widely? Take IC, for instance.


KPIs are useful in context, to echo other valid points given thus far. In a large-scale business plan or proposal, there is typically an understanding of what is to be performed that can be measured: SLAs for uptime, for you cloud dwellers. Are SLA metrics everything?

Also, when KPIs were hit in a past time frame, who talks about 20% of certain teams not being there anymore? I do agree past performance is not a guarantee. It's kind of a dirty secret that yes, this sales document is truthful, but it represents conditions that are no longer current.

What is really destroying business in US corporations is woeful understaffing in “gets things done” positions. Mostly due to Wall St metrics. Book the sale then revenue and bonus…then get out the kitty litter. It’s been a race to the bottom for 10 years in professional services and lower tier positions by the dozen.

Of course the new fad of “AI will make this more efficient with fewer people!” appeals more than “wow we need bodies to move figurative data boxes around…why don’t they want shit wages limited healthcare and terrible hours?” when talking to investors.

This is simply a problem of engineering capital to Human Resources but the invisible hand has been cuffed for so long there really is no middle class anymore. The rich don’t spend. The poor spend all they have and because milk is fucking $5 a gallon, gas is $3 here in Texas, and bills ain’t getting cheaper during 112 degree days - wanna know why the economy sucks?

When in lockdown basic income to not die or be evicted didn’t ruin society. Now that we’re out of the pandemic it’s forgotten. I’m sure lobbyists will somehow change the lesson to be “we need to buy guns from companies and give them out free that will solve poverty” and Ted Cruz will be right there eating bacon off their…

Businesses are self destructing. Not my monkey not my circus. Cassandra out.



