Paying Developers Like Sales People (hubspot.com)
23 points by bokonist on Feb 9, 2010 | hide | past | favorite | 27 comments


This is a well intentioned idea with horrible consequences.

Suppose 3 UI changes go out. How do you decide how much each increased users? Remember that money and ego are on the line.

Perhaps one was trying to drop support calls and the other increased users. Users went up, support calls stayed the same, and the team trying to drop support calls claims that they would have succeeded had the number of users not gone up. How do you resolve that?

You thought that developers saw QA as a roadblock before? Wait until they see QA keeping them from the check they think they are going to get!

And what about the developer with a half-baked UI change to improve conversion that is clearly going to flop? How do you gently break the news that you're not even going to let the developer try for that dangled reward?

I see no end to the potential for conflict. If I worked in an organization that announced a policy like this, I'd start shopping my resume around, because I can see the pain coming and would like to avoid it.


Suppose 3 UI changes go out. How do you decide how much each increased users?

Measure the change in usage for each screen or section of the app that each team worked on.

Perhaps one was trying to drop support calls and the other increased users.

Measure support calls per user (which is the right way to measure it).

You thought that developers saw QA as a roadblock before? Wait until they see QA keeping them from the check they think they are going to get

This has never been a problem for us.

How do you gently break the news that you're not even going to let the developer try to get that dangled reward.

These kinds of tests can easily be randomized. If product management really thought the UI was a bad idea, they could keep the first test size very small and only increase the sample if the design proved itself.
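To make the "keep the first test small, then grow it" idea concrete, here is a minimal sketch of deterministic hash bucketing, a common rollout technique (the function name and the 10,000-bucket granularity are my own illustrative choices, not anything from the thread). Because each user's bucket is fixed, raising the rollout fraction later only adds new users to the test group; it never reshuffles users who were already in it.

```python
import hashlib

def in_test_group(user_id, rollout_fraction):
    """Deterministically map a user id to a bucket in [0, 1) and
    compare it against the current rollout fraction. Growing the
    fraction later keeps every previously-included user included."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000
    return bucket < rollout_fraction

# Start the experiment at 1% of users, later ramp to 10%:
small = [u for u in range(1000) if in_test_group(u, 0.01)]
larger = [u for u in range(1000) if in_test_group(u, 0.10)]
```

Every user in the 1% sample is still in the 10% sample, which is what lets product management grow an experiment only after the design proves itself.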

If I worked in an organization that announced a policy like this, I'd start shopping my resume around.

We would definitely only do it for one small part of the team that volunteered for the experiment.

Overall, any kind of performance based pay depends on the metric. If the metric is robust, the scheme can succeed. If the metric is not robust, the whole thing fails. Will the metrics I proposed be robust? I can answer your concerns and attempt to make them more robust, but I don't know if they would be robust enough. I think it would be interesting to run the experiment, and find out for real.


Let's see. Your response to the 3 UI changes question is Measure the change in usage for each screen or section of the app that each team worked on. From that I conclude that you've probably never done A/B testing and discovered how difficult those deltas are to accurately measure, or discovered how changes upstream waterfall down through your site. The amount of data you need to narrow down performance changes to within, say, 5% is much more than most people realize. And from a business view you don't want to do that for the simple reason that once you have a demonstrated improvement, continuing the measurement to figure out the exact win is costing you real money.
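The "more data than most people realize" point can be put in rough numbers with the standard normal-approximation sample-size formula for comparing two proportions. This is a generic statistics sketch, not anything from the thread; the 2% baseline conversion rate and 5% relative lift below are illustrative assumptions.

```python
import math

def samples_per_arm(p, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate per-arm sample size to detect a relative lift in a
    baseline conversion rate p at two-sided alpha=0.05, power=0.8,
    using the two-proportion normal-approximation formula."""
    p2 = p * (1 + relative_lift)
    p_bar = (p + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p * (1 - p) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p) ** 2)

# Detecting a 5% relative lift on a 2% conversion rate takes
# on the order of hundreds of thousands of users per arm:
n = samples_per_arm(0.02, 0.05)
```

For a site with a 2% conversion rate, resolving a 5% relative improvement needs roughly 300,000 users per arm, which illustrates why attributing small deltas to individual teams is so expensive.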

If, as with several of my employers, people take a long time from engaging with your website to making money for you, the tracking problems get MUCH more challenging. When you add complex social dynamics on top of it (such as person A getting person B to make a purchase), you're in for a world of fun. (Yes, I've worked on a site that had to solve exactly this problem.)

Measuring support calls per user is also not as simple as it looks. Different parts of your site generate support calls at different rates. And it isn't always obvious what part of the site generated the call. This gets worse if you have the long lead time issue that I've faced multiple times.

As for whether QA is viewed as a roadblock, that depends strongly on team dynamics. But in places I've seen that had that bad mindset, things can get very, very bad.

On the randomizing tests, sure you can limit how many users are shown the option. But when someone makes changes that introduce technical debt on the back end, that debt and resulting bugs can't be segregated out so easily. Sometimes you really just need to say no to features. Anything that limits your freedom to do that is a Bad Thing.

Now don't get me wrong. I like the idea of performance based pay. My current employer (Google) is all about performance pay. But you have to structure the incentives right. Giving people pay after the fact based on feedback from people around you creates incentives to not just optimize a flawed metric, but to work together as a team and get stuff done.

This is important enough that while I'd read with interest what the experience of this experiment was like, I'd personally not want to be involved in anything like it until I'd been convinced that it really works like you hope it will.


From that I conclude that you've probably never done A/B testing and discovered how difficult those deltas are to accurately measure, or discovered how changes upstream waterfall down through your site

We've done it before. We've launched new screens, measured the usage, then done a bunch of changes to the screen and seen usage go up. We also have a lot of good data on which screens generate the support call, data on the conversions of all the different screens, etc.

But again, your points are all good. We have pretty good data. But it's nowhere near as simple and clear as the same type of measurement for a sales person. Would the data be good enough to base pay on? I don't know.


Sadly, developers also have a history of gaming the system far better than salespeople do, partly because of the measurement problems pointed out in the article, and such a system usually ends in failure.

Also, the bounty scheme has largely failed in the open source world, perhaps because the bounties never approached $450,000, and I personally doubt the model could be effectively applied to an even smaller team.

Sure, many people would like to make more money, but sadly a lot of dev work of little 'financial value' still has to be done, and it would surely suffer under such a system, driving overall quality lower: instead of creating a sense of improving the product, it pushes people to chase the largest or best bounty.


It would be very easy to accumulate a lot of technical debt (hacked up code) very quickly.

Also, it would be frustrating for developers being paid performance pay but not totally in control of their environment. Bitter infighting might occur about language and technology choices, server designs, even dev machine quality or working hours.

However, I do like the idea of proposing a general goal and asking developers to come up with their own ways to meet those goals.


Perhaps you could solve the technical debt issue with delayed payment: 50% now, 50% two months later, minus 10% per bug still unfixed in the original code.
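As a sketch, that split works out like this (the percentages are the parent comment's; the function name and the rule that the holdback can't go negative are my own assumptions):

```python
def bounty_payout(bounty, open_bugs):
    """Hypothetical delayed-payment scheme: 50% of the bounty up
    front, the remaining 50% after two months, docked 10% of the
    total bounty per bug still unfixed in the original code.
    The holdback is floored at zero."""
    upfront = 0.5 * bounty
    holdback = max(0.0, 0.5 * bounty - 0.10 * bounty * open_bugs)
    return upfront, holdback

bounty_payout(10_000, 2)  # -> (5000.0, 3000.0)
```

Note that under these numbers five open bugs wipe out the entire second payment, which is the intended pressure against shipping hasty code.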


That's a lot of complexity. Pay structures for sales people tend to be reducible to a couple tables or a formula.

There's also a more subtle difference: a sale is a discrete event, so there is a known point at which we can determine the goals have been fulfilled. Software is a living thing -- its quality can only be inferred with time. What about bugs? What about flexibility to adapt to changing business requirements? E.g., if I anticipate that we will need to swap out component x, so I don't bind tightly to it, how do I get compensated for my foresight when it adds only minimal overhead to my development practice?

By incentivizing the completion of objectives, you are asking for the buildup of technical debt. The objectives of a salesperson (not customer relationship manager or marketer) are necessarily short-term. The objective of a healthy development organization (as I see it) is to consistently produce high-quality software that meets business objectives in a predictable and transparent way.

Programmer productivity structures almost always have or create the development goal of "meet short term objectives quickly", which is a kiss of death for long-term growth and programmer satisfaction once you have to deal with the organic mess all of the shortcuts have created.


What problem is this trying to solve?

I know it's not perfect, but doesn't the market end up doing this pretty well, especially when you throw contractors and consultants into the mix? It's perhaps not as scientific and certainly not optimal for each individual case, but in the aggregate, it would seem that the rate that developers and salespeople get paid is probably pretty fair.

The big problem is a lack of pricing transparency in the labor market.


1) Management and the product team often have some idea of how valuable a certain task is. The dev team is responsible for how long it will take. Often we will invest too much time in something that's not that valuable, since management is not aware of just how long it will take. Or we will under invest in something very valuable because management doesn't realize how easy it is to knock off. If management can just put a price on the stuff it wants done, then I think developers will be able to pick off the stuff that gives the most bang for the buck, which is what we want.

2) Paying for performance makes a big difference in developer productivity. That's why startups get so much done - they live or die based on how hard and, more importantly, how effectively they work. As we've grown from a couple of developers to over twenty, I think our pace has slowed. I'd like to see it more like the early days.

3) The best sales people can make 2X what a developer makes. As a developer, it would be nice to be able to make that much money. But the company cannot do that right now, because it cannot accurately measure my performance.


A team of 20 developers has lower productivity for a lot more reasons than individual incentives. This has been widely recognized since The Mythical Man-Month.

Concrete data point for you that I ran across in Software Estimation by Steve McConnell: the average team of 5-8 programmers can get a medium-sized project done in the same calendar time as a team of 20 programmers. As teams grow beyond 20 programmers, total throughput improves. This is from a study with project size measured in lines of code, and there is every reason to believe that the larger team has more reinvention within its project, and therefore that its project of equivalent size has less functionality.

The fundamental reason has to do with lines of communication. As you add a person to a team, the number of lines of communication you add is the size of the team. At some point people take up more energy from others than they contribute useful work themselves. There are a myriad of organizational techniques which try to mitigate this problem, but all suffer from the defect that they force people to use less productive forms of communication (for instance writing documents that can be read by many others), and therefore lower individual productivity.
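The lines-of-communication argument is just the handshake count: adding the (n+1)th person adds n new pairwise lines, so the total grows as n(n-1)/2. A tiny illustration (my own, consistent with the team sizes quoted above):

```python
def comm_lines(n):
    """Number of pairwise communication lines in a team of n people.
    Adding one person to a team of n adds n new lines, so the total
    is the handshake count n * (n - 1) / 2."""
    return n * (n - 1) // 2

for size in (5, 8, 20):
    print(size, comm_lines(size))
# 5 -> 10, 8 -> 28, 20 -> 190
```

Going from 8 to 20 people multiplies the pairwise lines nearly sevenfold, which is why the overhead eventually swallows the extra hands.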

Therefore if you've got 20 developers you should seriously consider going back to a small team, find a way to partition them into groups with minimal interactions, or scale your organization up until you have a lot more developers than that. The first two try to solve organizational problems that lead to low throughput, the last accepts the issues but lets you tackle problems that the small team couldn't.

Another random suggestion. If you're interested in productivity, pick up a copy of Peopleware and Rapid Development, read them, and pick a set of best practices to try.


We already have teams of 3-4 developers, working on separate projects with minimal interactions. If we didn't do that we would be thrashing. We're aware of the Peopleware/Mythical Man Month issue.

But still, I think we could do better. Working based on hitting external goals adds invaluable focus to a team.


1) I think this is just a symptom of bad management. Again, I'm not arguing that the market situation is optimal for individual companies, just that equilibrium is reached in the market as a whole, more or less. Sometimes this is accomplished by companies with poor management going out of business because they can't accurately judge the priorities of projects.

2) I think there's a lot of reasons that startups get more done, like just having fewer people.

3) If the best salespeople make 2x as much, it's because they're 2x as valuable. I also doubt that the best salespeople make 2x as much as the best developers, but if so, it's likely because they're twice as valuable. If you want to make that much, go do sales. And if you can't because you lack the skills, perhaps that tells you something about the value of salespeople :)


If you thought it was hard now to get a developer who switched teams to help fix bugs in their last project, wait until you put $200,000 on the table.


Developers have to fix their own bugs regardless, or they quickly stop being developers at our company. In this system they have a lot more incentive not to create bugs, because the time it takes to fix them will take away from earning their next bounty.


I hear what you're saying but I have also been in this situation before. There are no-fault bugs that happen (or, maybe not even bugs: maybe just someone's got the system configured badly, and the person who wrote the code is the best person to figure that out). You're not firing the developer who left a bug behind. You're just incentivizing them not to help.


And why should they? Their commitment is over. On the flip side, the system should be set up in such a way that a developer could quit or get hit by a bus without serious implications for the business.


The $200,000 should be for documentation, then, not features.


It might be hard to sell upper management on the value of documentation vs the new feature "required" for the $1mil customer to finally purchase your system.


What happens when the feature doesn't quite work, eats ten million dollars' worth of the client's data, and the developer who hastily implemented it has retired to his own island with the "bounty" money?

Oh yeah, you're fucked.


A good read on this subject (which shows why it falls down) is von Mises "Bureaucracy". It's probably available on the web somewhere.

Also, Ronald Coase on the Theory of the Firm. Strangely enough, economists have a lot of useful things to say about business ideas.

It's all about transaction and information costs. The hassle of managing and measuring programmers in this way would outweigh the benefit of keeping them working hard.


This seems like a great way to make the software industry more like the sales industry; short-term gains at the expense of long-term relationships. (Has a sales person ever made you want to have a long-term relationship, or have they wanted you to buy something "right now"?)

There is only so much time in a day, and that time can be spent in certain ways. You can implement a new foobarbaz feature, or you can write some tests to ensure that the gorchquux you wrote yesterday works correctly. The tests don't really add any value, right, because the feature is already there. Just add the next feature, because the user can see that (and now you get $200,000!)

The problem comes after repeating this cycle a few times. With no design upfront and no ongoing effort to make the codebase maintainable, eventually there will come a feature that you just can't add. Moving something out of the way will knock the whole pile over. Changing one line in the config file will require your newly-hired team of 10 testers to work overtime. You will be so encumbered by your technical debt that you can't add new features, you can't fix bugs, and you can't get the codebase into a state where you can do those things.

The only solution then is to redo everything. (Remember Netscape after the big rewrite? Neither do I...)

Lately, I have been on an "overengineering" kick, and I don't regret it at all. As the requirements change, my software can quickly adapt, without becoming less clean or maintainable, and without taking much of my time. This means that my software has lasting value instead of the usual "it solves my exact problem right now".

Before I type anything into the editor, the thought in my mind is, "how can I make this require less code and less developer attention later?" Right now, I have plenty of time. Who knows what I'll be up to in a few years when the foo system needs that new feature. (Usually people plan the other way: I'll do this later, I'll have plenty of time tomorrow. Every time I've said that to myself, I've been wrong. So I've stopped assuming that next time I really will have time to do it right later. I won't, so I'm going to do it right now.)

The incentive structure outlined in the article encourages the opposite. It's already very difficult to "do things right". Paying someone money to not do things right is not going to make it any easier. The result will be software that is even worse than what we have now; and it's well-documented that most software is pretty bad. Shudder.


Actually, developers under an in-house bounty system have quite an incentive to make maintainable code. For it is very likely that the bounty for next month will be a feature added to the code they are already working on. If the developer makes the code a mess, then they will not be able to keep earning bounties and will start losing a lot of money. A developer just needs to think long term. And if you haven't hired developers that can think long term, you already have a problem.


Has a sales person ever made you want to have a long-term relationship, or have they wanted you to buy something "right now"?

I've had sales people try to make me want to have a long-term relationship. Of course they were selling themselves at the time. ;-)


This is a worthy cause, but I am skeptical. For one thing, the author has neglected engineering debt - it's far too easy for me to add features by sacrificing code quality, or worse yet design quality (even while adhering to the coding standards). There should be a way to measure engineering debt and take it into account. One idea I have here is that a piece of code should have an owner with an incentive to keep the code in good order, and that owner would accept money from a developer making changes, as compensation for the damage - if a change requires two weeks to clean up, a smart code owner would charge 2.2 weeks, coming out ahead and giving the developer an incentive to do a better job. But then the problem is that each piece of code becomes a monopoly with extortionate prices. There are two ways to deal with monopolies - regulate their prices or introduce competition (by direct action if needed).

The other problem is that this may get quite expensive. Why should a business pay more when it could pay less?

Nevertheless it would be interesting if somebody tried this.



"The output of a sales person is easy to measure." Spoken by somebody who has never actually been employed as a sales rep. I've closed deals that were worked on by sales reps who had been gone for a year or two. How exactly do you measure that value?



