Hacker News | NSSec's comments

I'd say OKR work is/should be all work. Each task should be aimed at moving toward the goal. That doesn't mean, however, that you can directly link every specific task to an OKR; it's just part of the strategy for achieving the goals.

Let's consider a KR of 'increase active users by X%'.

* Interviewing/hiring isn't going to move the needle on getting X% more active users by itself. Having more engineering time available for new (critical) feature work might, however, and you have to do interviewing to get new hires.

* Training in and of itself isn't going to move the needle either. But having a more knowledgeable engineering team probably will, by being able to move faster or better. You have to do training to improve knowledge.

* Maintenance work might not move the needle on active users FORWARD (or it might, if you have a lot of bugs that prevent users from actually using your stuff), but it might prevent it from going BACKWARD.


> "Write an integration test for xyz" is not really what OKRs are designed for.

This gets it backwards. That is a task. Why are you performing it, especially without it being tied to specific development work? If you can't justify doing it with your/your team's/your company's stated OKRs, you actually shouldn't be doing it, even if you feel you should because 'it's the right thing'. And even approaching it that way (starting from the task and justifying it afterward) is backward.

You should be looking at your OKRs and figuring out what you should do to get there. But, you might have to break it down a little further.

Indeed, a KR of 'increase monthly active users by X%' is not directly actionable by an engineer. But an engineering department can come up with its own OKRs that point in that direction.

For instance, to achieve that goal it might be necessary to develop new features. However, if 70% of engineering time is spent on bug-fixing, then the team isn't going to be able to do that. (Do you know where your team spends its time? It might be worthwhile to figure that out.)

So an objective of 'increase development time for features' (or some such) might be considered. If you find you're dealing with a lot of bugs, one of the key results might be to reduce bug reports by 50%. To get there, you might argue for adding integration tests: you can then change code and catch issues before they reach production, spend less time on bug-fixing and rework in the future, and free up time for the new features that will entice new users and fulfill the company objective.
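The cascade described above (company KR → department objective → key results → concrete tasks) can be sketched as a simple data structure. This is purely illustrative; all names and numbers below are made up for the example:

```python
# Hypothetical OKR cascade: the company-level KR is not directly actionable
# by an engineer, so engineering derives its own objective and key results
# that feed into it, and each task hangs off one of those key results.
company_kr = "Increase monthly active users by X%"

engineering_okr = {
    "objective": "Increase development time for features",
    "supports": company_kr,
    "key_results": [
        {"kr": "Reduce incoming bug reports by 50%",
         "tasks": ["Add integration tests for the checkout flow",
                   "Triage and fix the top 10 recurring bugs"]},
        {"kr": "Cut time spent on bug-fixing from 70% to 30%",
         "tasks": ["Measure where engineering time actually goes"]},
    ],
}

def trace(task, okr):
    """Walk from a single task back up to the company KR it supports."""
    for kr in okr["key_results"]:
        if task in kr["tasks"]:
            return [task, kr["kr"], okr["objective"], okr["supports"]]
    return None  # a task with no chain to an OKR is worth questioning

chain = trace("Add integration tests for the checkout flow", engineering_okr)
print(" -> ".join(chain))
```

The point of `trace` is the earlier argument in reverse: if a task yields no chain back to a stated objective, that's the signal to ask why it's being done at all.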


How do you measure the impact of rare but very expensive flaws without inventing phony metrics to pretend to fit the tasks you think need doing?

"Don't have a catastrophic data breach"

There isn't a middle ground where there is a linear correlation between a serious compound failure and fixing bugs or writing tests.

You could come up with a methodology for a "security score" and track that. However it is both a transparent workaround for having specific bugs / tasks in OKRs and it is likely that your invented metric will be bogus and will lead you to do irrational things.


IMO, you don't. Operational security isn't (or shouldn't be :)) a specific _business goal_ for a company. By that I mean it's not a specific objective to achieve: it should be a base principle, part of the engineering (and operating) principles you apply to everything you do.

Put differently: you have a way of working that includes applying measures for security. For instance, you might do threat modeling at an early stage of your project/iteration. Those sessions may result in concrete work to perform. You might also perform a security test at the end of an iteration that confirms things have been covered. That adds to your workload and affects your ability to achieve goals in a certain way. You apply this way of working to move toward a goal.

So, over time, you will find out that applying these principles has a certain outcome. It may be positive ("we're not seeing security issues and development pace is OK") or negative ("security tests are coming back bad", "we're hacked", or "development pace is snail-like"). When it's negative, that means you have to do things that will slow down achieving your goals.

That kind of negative feedback should lead to a poor(er) performance score on the KR when reviewing them. It's this kind of feedback that should factor back into your decisions moving forward to achieve the goal. In this example, if you have negative findings, you might choose to educate engineers on security matters, streamline your engineering principles, increase the team size and/or even replace people.


If you put off security work until "we're hacked!", it's too damn late. You can't bolt on security after the fact. Similar for testing, refactoring, etc. Your approach here is a recipe for putting off important work until it's too expensive or flat out too late.


I agree. I shouldn't have added 'we're hacked'; it distracts from what I'm trying to say. Your conclusion that I'm describing a recipe for putting off work is the exact opposite of my point.

My point was that you should have a certain set of sane engineering principles (security being one area they should cover), and they should be adequate by today's standards. These principles are not, and should not be, business goals: they are tools for achieving goals in a responsible and reproducible way.

I am also saying that if you get feedback that these principles are keeping you from your goals, you should include that in your evaluation when determining the next steps forward, without dictating a specific manner of dealing with them; that's up to the specific situation at hand.


> "Don't have a catastrophic data breach"

There is no planning process that can effectively deal with long tail existential risks like that. It's silly to fault OKRs for not solving it when nothing else has.

Best case scenario is something along the lines of:

Objective: reduce annual carry cost of IT data breach insurance
KR: Do project X to insurance co.'s satisfaction to lower our premiums
KR: Prevent regressions in existing compliance by running regular audits

While this outsources the scoring problem to an insurer, at least they have multiple customers to amortize over and extract some data from.


When would you say you are "done" for a given period with "not having a catastrophic data breach", regardless of OKRs? I know some tasks, such as this one, are never really done. But for a given period (such as a month or a quarter), it does not help to have a task that just says "don't have a catastrophic data breach". You have to turn that into something you can actually do within a given timespan, and that works fine with OKRs (and should be done without OKRs as well).


My objection to (or misunderstanding of) OKRs lies exactly here: there are no satisfying answers to this question.

If a sizable chunk of work can't be covered by OKRs, then why are you using them? Or how do you use them with the understanding that they cover work and achievements only partially and inconsistently?


That is an actual objective at Google. Supporting key results would be things like performing a certain number of audits, conducting pen tests, code reviews, etc.


Indeed, it's not immediately obvious from the text, but I'm certain they mean (the joy that is) XMLSEC and JWS here. I'm not sure about JWS, but XMLSEC certainly has similarities to the format described here: you define exactly which parts of the document are included in the signature.
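To illustrate the XMLSEC property mentioned above: in XML-DSig, each `<Reference URI="...">` element inside `<SignedInfo>` names exactly which part of the document the signature covers. Here's a minimal sketch that extracts those URIs with Python's standard library; the envelope document itself is invented for the example:

```python
import xml.etree.ElementTree as ET

# XML Signature namespace, per the W3C XML-DSig specification
DS = "http://www.w3.org/2000/09/xmldsig#"

# Invented sample document: the signature covers only the element
# whose Id is "body", as declared by the Reference URI.
sample = f"""
<Envelope>
  <Body Id="body">payload</Body>
  <Signature xmlns="{DS}">
    <SignedInfo>
      <Reference URI="#body"/>
    </SignedInfo>
  </Signature>
</Envelope>
"""

root = ET.fromstring(sample)
# Collect the URI of every Reference: these are the signed parts.
signed_parts = [ref.get("URI") for ref in root.iter(f"{{{DS}}}Reference")]
print(signed_parts)  # -> ['#body']
```

Anything not named by a `Reference` (here, the rest of the envelope) is outside the signature, which is exactly the flexibility, and the source of many pitfalls, that makes XMLSEC "a joy".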


Sorry buddy, us Dutchies have them too. Also, I think 'washcloths' aren't as special as you think.


Yeah, Dutch too, that's correct. But almost nowhere else.

