“ Those successful companies aren’t successful because they use OKR’s. They use OKR’s because it is designed to leverage the empowered product team model.
And as I have tried to make clear with years of articles and talks, the empowered product team model is a fundamentally different approach to building and running a tech-product organization.”
Amen to this. Google is an example company that claims to have had great success with OKRs. But let’s not forget that before OKRs, they had empowered teams, a growing market which they disrupted and grew, and a unique culture.
My bet is they would have done spectacularly well without OKRs, using some other method as well. You can’t say that OKRs caused Google to succeed, and they certainly won’t be enough to turn a company or org around, as the author argues.
In general I'm skeptical whenever anyone wants to copy Google as their advantages and disadvantages are so far removed from a typical company and always were. I don't think they're a particularly good company to model for most companies.
Good Times: pick something that can amplify abundant resources. There is going to be less bureaucracy around decisions and spending so move quickly and place lots of bets while you can.
Bad Times: pick something that works with meagre resources. You're going to spend more time justifying spending.
Google is a fantastic engineering company, but terrible at building products.
Yes, you can find flaws in any of those but it's also hard to find better products in the same category. Google is not in any way "terrible at building products", they are one of the best companies in the whole world in building products.
Maybe it's more accurate to say that Google are one of the best companies in the world at buying products.
All those acquired products were nearly non-existent compared to their post-Google versions. The founders may deserve some credit, but it was mostly Google's work that turned them into real products. Let's not be the "idea guy"; what really matters for success is execution.
Add in a company that can fund operations, provide a flow of users, provide a pool of advertisers, and pay real compensation to engineers, and you have a real success.
I give Google a lot of kudos on this.
Do we really want another Sun Microsystems happening?
False analogy, because:
- Software provides the most utility, not the hardware, which is why IBM needed Microsoft more than vice versa, and why Android wouldn't have been nearly as much of a success if they'd been absorbed into a single phone hardware company.
- Google in 2005-2010 was a very different company to IBM in 1980; i.e., much more innovative, fast-moving and highly motivated to grow fast to rival the iPhone.
And maybe that would have been for the best. Consider that within a year of the HTC Dream (the G1, the first commercial Android phone), Nokia released the N900, an amazing device running real Linux.
We could have had Maemo instead of Android.
YouTube is full of ads and fake comments, including spam.
I agree with the parent post that Google products benefited most from being first or being acquisitions - it's like Microsoft in the early 2000s or so.
> gmail is the best
That's a very subjective statement. I don't think gmail is the best webmail out there, but it's certainly one of them. It's also one of the oldest. It also adds the contents of all your emails to their profiling service.
I stand by my statement that Google is terrible at building products; they've only built half of the things you listed and bought the rest. One of them could barely be counted as a success (Cloud), and another isn't a product (Android).
In the last decade, I can't remember a single innovative original Google product, or a copy-cat product that is better than the competition.
I've been working with GCP a lot during the last two years, and I think the experience is actually quite a bit better than people generally assume or adoption numbers suggest. It's probably not a great fit for large companies with really big engineering departments building really complex systems, but getting started on GCP as a small to medium org is a breeze, products at different levels of abstraction complement each other well (so it's easy to grow) and everything seems to be built to be minimum hassle and require a lot less fiddling by an engineer to get going than the corresponding AWS product. Much better web console than AWS, great K8S offering, BigQuery can do machine learning nowadays, which is actually usable for simple tasks. Everything has lots of sane defaults, it's usually only a couple of clicks to spin up pretty much any resource; in AWS this tends to be a huge pain.
Although, if you're in the EU and bound to a strict interpretation of GDPR, then your experience is going to be absolutely miserable, because there's no way to actually restrict data to stay within the EU for the vast majority of products. A lot of the serverless products have something like "data may be processed in US West or global" hidden in their docs (which legal tells me is a big no-no), and while you can set allowed regions, even BigQuery violates them silently, so that's useless and dangerous as well.
So if they were to get better at selling what they have (which they really, really suck at; we're using them not because, but in spite of, their sales efforts) and maybe discover Europe one day, then it'll be a very strong contender for AWS/Azure.
OKRs are one of many parts of that, in my opinion.
The impression is based on lots of anecdotal experience. I know many people at Google, I use many Google products, I talk to other industry leaders about their impressions of Google.
Take, for example, Google Tasks. It has roughly the same number of features, and the same design, as when it launched two years ago. Presumably there are engineers assigned to it. What on earth are they doing? Meanwhile, Microsoft To Do has subsumed all the features of Wunderlist and has become a far better to-do list app. It's a small example, but it's emblematic.
The same applies to Google Apps. Google Docs, Google Sheets, and the rest haven't had any substantial new features or (more importantly) performance improvements in 10 years. Try loading a hundred-page Word document in Google Docs and it still lags out.
And then there's chat. Oh, lord, Google's chat applications. How many do they have again? Five? Six? Which ones are they canceling this year? Every year, they promise that they're going to "unify" chat, and every year they fail to deliver on that promise.
The only product at Google that actually seems to get updated and maintained is search. Everything else is either a half-baked science experiment that is soon to be canceled or a stagnant leftover without a clear product roadmap. You can quibble that this is bad product management rather than bad engineering, but as far as I'm concerned, bad product management is bad engineering.
Another is the frat-boy hazing code review culture. Heck, there are plenty of "Noogler's first code review" memegens floating around. It's totally not bullying, it's just joking, right?
The author of that article is completely confused about the goal of protobuf. Read this rebuttal comment, written by the previous owner.
proto v2 and v3 let you distinguish set and unset default values (no idea about v1)
> If it were possible to restrict protobuffer usage to network-boundaries I wouldn’t be nearly as hard on it as a technology
It's a serialization format. It doesn't claim to be anything else. When people use it as their application's heap data model, they're misusing it. People are lazy and love to use their wire format as an internal data model (no one likes writing converters to/from the wire format), so this problem plagues everyone, but wire serialization is a fundamentally different problem from representing data within your application. When you conflate these problems, you get abominations like Java Serialization.
I agree with this and the author of the criticism evidently does too, but his position is that failing to make this distinction indicates that protobufs are poorly designed. I don't see how "programmers are lazy and fail to work around the poor design" is much of a rebuttal.
In my mind a good design has one or more appropriate representations for each abstract type. The marshalled representation is certainly one, but one might also want more than one in-memory representation of the same type, depending on the problem domain. The old Lisp trick of representing an immutable linked list with a single contiguous block of memory is one example.
The author never offers any evidence that protobufs fail to make this distinction, just that people are lazy and misuse them.
You can't control what people do with generated types. I don't know how you'd even write a linter for Java or C++ that would know what an appropriate usage of protobufs is, to say nothing of writing such linters for every target language and then integrating them into every possible build system, CI framework, and code review application.
Those thorough Google code reviews you complained about earlier -- one of the things they taught me was not to use protos as the application data model. Read the proto off the wire, then convert it to another type, or, if you're in a hurry, wrap it in another type that does validation checks and hides the proto accessor methods.
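That convert-at-the-boundary pattern is easy to sketch (a minimal Python example; `UserProto` and its fields are a hypothetical stand-in for a protoc-generated message class, not a real API):

```python
from dataclasses import dataclass

# Stand-in for a generated protobuf message; in real code this would
# come from protoc-generated bindings. The fields are hypothetical.
class UserProto:
    def __init__(self, name="", email=""):
        self.name = name
        self.email = email

# Internal domain type: validated once at the network boundary, so the
# rest of the application never touches raw proto accessors.
@dataclass(frozen=True)
class User:
    name: str
    email: str

    @classmethod
    def from_proto(cls, proto):
        # All validation lives here, at the boundary.
        if "@" not in proto.email:
            raise ValueError(f"invalid email: {proto.email!r}")
        return cls(name=proto.name, email=proto.email)

    def to_proto(self):
        # Convert back only when writing to the wire.
        return UserProto(name=self.name, email=self.email)
```

Everything past `from_proto` deals in `User`, so a change to the wire format touches one conversion layer instead of the whole codebase.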
I wouldn't try to massage protobufs into satisfying that need, but I do agree that it's an area that should get more research and development. Looking at it from a linter perspective is making the problem way harder than it needs to be, though. Verification is, generally speaking, a much more difficult problem than construction. For example, it's much easier to construct a product of two large primes than it is to verify that a given number is such a product. Anyhow, I'm not one of those people who think the theorists know everything and practitioners are all idiots; I'm just one of those people who think practitioners should learn from theorists and that, sadly, the former are often irrationally resistant to the notion. Not always, though: TLA+ is a great example of a tool working software engineers use to build real systems that are theoretically verifiable.
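The construct-versus-verify asymmetry can be made concrete (a toy Python sketch; the two primes are just illustrative values, far smaller than cryptographic sizes):

```python
# Constructing a product of two primes is a single multiplication...
p, q = 104729, 1299709  # the 10,000th and 100,000th primes
n = p * q

# ...but verifying that some given n is such a product means factoring
# it, which even by brute-force trial division costs up to ~sqrt(n) work.
def smallest_factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

assert smallest_factor(n) == p  # ~100,000 trial divisions to undo one multiply
```

At real key sizes the gap becomes astronomical, which is the whole point of the analogy.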
I was being a bit cheeky about the code reviews. I do think that the reviewers delighted in the opportunity to be shitty to an unseemly degree, but I also agree that it was net beneficial. That said, I did find that the engineering quality in the SRE orgs was significantly higher than in the SWE ones. Which is the complete opposite of every other company I've ever worked for.
Search and AdWords are probably 'well run' by whatever measure. And they make money with a radical surplus, kind of like pumping sweet crude cheaply.
And like such places that have cheap oil, a lot of other things are dysfunctional.
We can look past some failures like 'Wave' as merely taking risk, which is fine. But strategic disasters like 'Google Plus' show an inability to do something coherently across business units.
YouTube, in many ways, has a really, really poor interface and some really bad and slow-moving features. But hey, it makes money.
I worked at a major handset maker that made money hand over fist on devices, and it was laughable how dysfunctional some things were.
This is normal.
OKRs are probably a minor contribution to G's success, and maybe they help confirm G's unique 'culture' as much as anything, in that it's powerful to be able to say 'this is how we do things here', it's like a kind of 'internal branding'.
I don't know too much about what happened after that, but I suspect not having control over the money making apparatus made it hard to control their destiny, and now we have Verizon owning them and finishing up the death spiral.
I encountered a ton of cool technical solutions to things that were Really Hard Problems in the 90's or 2000's, and that was appropriate for hardware from that era, but the outside world moved on, while everyone inside Yahoo still thought that their shit was hot shit.
There were for sure a lot of smart people still working at the company, but the company had 0 cool factor, so they had an incredibly tough time recruiting and retaining people that could move the state-of-the-art forward.
...and no buzzword management strategy like OKR's or whatever is going to change that fundamental problem.
Google does seem to sabotage a lot of its products, but so far hasn't really sabotaged its core products or its company. As long as they don't destroy the dollar printing press (Adwords/search), they can keep pissing away money on dumb things and it won't kill them.
Shame, because that was the one thing they were quite good at at the time.
I think the existence of other companies that were in similar starting positions, and then subsequently failed, would prove that you need more than just that starting position to succeed.
What allowed them to 'win' in that space decades ago? I don't know, nor do I feel the need to know in order to judge much more recent history. It could have been organizational structures that either in a different time, or with a different group of people, or at a different scale, or whatever, were the key. Or something else. I have no comment on it.
But I have a very limited view, having lived in SF for a few years and just having insight into the companies I've seen. I bet there are loads of them that I'm just ignorant to.
Also from the article:
"You can’t take your old organization based on feature teams, roadmaps and passive managers, then overlay a technique from a radically different culture, and expect that will work or change anything."
What I really appreciate in reading this article is that it nails why OKRs were absolutely worthless at my company, for exactly the reasons this author has stated: there was no real empowerment behind them. Most often they were used for personal development, since I had no meaningful say in my actual work. It's always a stretch to put personal development in terms of OKRs: "I'll write two blog posts this quarter." "I'll run one lunch and learn." OK, those are fine activities, but we hardly need OKRs to grow as engineers - and just imagine how annoying it is to be at quarter's end struggling to write a blog post when you found out midway that you enjoy learning in an entirely different manner.
Well, you can't choose your execs, you can't choose your culture... so thank you for describing the ideal world, but almost no company will be able to apply your advice.
Again, in this article he doesn't explain how to replace OKRs; he just says that if OKRs don't work, then your culture is the issue.
After all, what's the value in a tool aimed at providing alignment in decision-making, if nobody actually gets to make decisions?
So yeah, idealism, but of a form that's worth aiming for.
Can you choose where you work?
It pretty much boils down to: Set well defined, well reasoned, and slightly ambitious goals...communicate them to your developers...and then leave them the hell alone to try and accomplish them.
Like no, hard no. OKRs are great, I personally love them, but if your company has deeper issues, a set of OKRs won't help and will likely hurt, as it obfuscates the underlying issues that exist.
This has always been my experience with OKRs. They were implemented by directionless companies with weak management as an attempt to bring structure, but the real reason there was no structure or coherence is that nobody in leadership agreed about what the company should do.
They ended up being such vague, arbitrary, and ambiguous goals, like "dominate this sector" or "become a leader in that vertical", or "hire x engineers by y time" that they were effectively meaningless. What does dominate mean? What are these new engineers going to do once they're here? Nobody had answers to these questions, yet achieving the OKRs was paramount.
I'm sure there are examples where they work well, but I suspect this author is onto something and the companies that use them successfully would be well managed with or without them.
From TFA: "most companies are not set up to effectively apply this technique". Yes. And if you read the above book, you'll learn that this is sort of the point. You WILL fail at your first few OKR cycles, but you're supposed to use those experiences to change your company into one that CAN set and achieve objectives.
If you think your company is one where OKRs won't work, all the more reason to do them.
OKRs are about creating and communicating strategic short-term objectives across the company, so they remain top of mind. And that's 80% of what you need to know! The objections to OKRs mentioned in the article don't make sense to me, because it seems to follow a different definition of what OKRs are.
Fundamentally, OKRs only make sense if your organization is focused on outcomes, not on output. And underlying that, it's really about the culture in your org. It requires a good executive and senior management team to actually cascade the OKRs into non-direct financial terms. E.g., if your product objective is to increase retention by X, how do you translate this into a strategy?
Also, OKRs don't make much sense if product/engineering are run separately. This is still extremely common outside of tech companies. The cascading, which I consider one of the most fundamental differences compared to traditional management by objectives, is often a productivity killer: if you do OKRs every quarter, and you have 3-4 levels to cascade through, you're going to get your OKRs at the end of the first month, which means you realistically only have six weeks left in your quarter.
Finally, Measure What Matters: I don't understand that book. I found it completely worthless as an engineering manager, with absolutely zero actionable insight. It could have been 5 pages. The famous football team example is the only one with enough detail to actually explain things. On similar topics, High Output Management, or even The Hard Thing About Hard Things, were much more useful for a middle manager like me.
I know this is HN and you could totally be Drew Houston; but I'm going to go out on a limb and say that your company has fewer than 500 employees.
The problems the OP is describing are, in my opinion, usually present at a large, bureaucratic company. There is no strict size definition for this (I've seen incredibly bureaucratic 50-person startups), but Dunbar's number is a good rule of thumb.
> You WILL fail at your first few OKR cycles, but you're supposed to use those experiences to change your company into one that CAN set and achieve objectives.
Once you get into a bureaucratic company, it's largely about avoiding failure or the perception thereof. Since you own your company, it's easy for you to take this overall view of "if it failed, it failed". Any mid-level manager or individual contributor is going to be incentivized to avoid the perception of failure since it's super-bad in a large organization. Google is often used as a counterexample, but I'm not entirely sure that Google is a company that's good at making products. Also, Google uses the "hire extremely smart people and throw a truckload of money at them" approach, which most companies under discussion (including yours) likely don't; and that (IMO) is a far better predictor of success than OKRs.
> The objections to OKRs mentioned in the article don't make sense to me, because it seems to follow a different definition of what OKRs are.
This is the No True Scotsman fallacy. The reality is that the "cascading" nature of OKRs often gets lost in translation and doesn't take into account macro-level changes in your specific vertical. Yes, the company's goal is to become a leader in the field of underwater baskets; how does this translate to me rewriting that terrible frontend code that the cofounder's buddy wrote in 2007 and has never been touched since? Doing that translation will become more difficult year by year due to how technology evolves; and most people in "leadership" suck at doing that translation well. That's what the article is talking about.
What the book DOES say is that you should not create such a fear of failure that no-one will set stretch goals, and that OKR results should not replace individual performance evaluations. In short, OKRs are what you seem to think they are, plus the culture that makes it possible. If an organisation doesn't get that right, are they doing OKRs?
If your organisation does not allow developers to push back against impossible objectives, do you not perhaps have bigger problems that OKRs (or any other system) can't fix for you? Why would you blame OKRs - would any other system not also be painful when a company has become so dysfunctional? Also, did you know that all objectives don't need to cascade - some can be set bottom-up?
You're going to call No True Scotsman fallacy, and I'm going to call Straw Man, so I'm done with this thread. Perhaps I just fell for John Doerr's elaborately constructed fantasy that he uses to sell books. But right now it seems like a useful tool to keep a company on track, and I'll keep adapting it as the company grows.
But John Doerr's book introduced me to the bridge of concepts that I was not seeing: in a healthy setup, we are first and foremost focused on a qualitative objective, and THEN we attempt to model that fuzzy feeling with a quantitative measure (the "key result") that should reflect success. It takes several iterations to come up with a matching measurement, and even then we need to constantly re-evaluate whether the measurement is appropriate or whether it is devolving into a numbers game devoid of the true objective.
In other words the full acronym is OAMBKY, or "Objectives, AS MEASURED BY Key Results". But that doesn't roll off the tongue quite as well.
So in that light, OKRs are a useful tool IF AND ONLY IF leadership—as well as the whole team—is focused on the philosophical, qualitative goal, and all are aware that the measurement is only an imperfect proxy, constantly re-evaluated to help us better assess the goal; not a goal unto itself. But it takes real leadership to drive that message (as well as to avoid setting up misguided incentives).
"When a measure becomes a target, it ceases to be a good measure." (Goodhart's Law)
I mean, this is kind of what the article is saying; just along a different dimension.
> Perhaps I just fell for John Doerr's elaborately constructed fantasy that he uses to sell books.
A lottery winner can write a book about their excellent saving habits and how they helped a lot. If you follow their advice, you may do quite well, and gain some wealth!
But the advice can also be largely unrelated to why they are wealthy; and people hoping to be multi-millionaires as a result of those habits might be disappointed. This does not mean that the lottery-winner's advice about being frugal and saving money is inherently bad. Go ahead and save, by all means.
A system won't work for 100% of cases, because that system was designed around a particular team. And there are times it won't work well.
Whether the system is good/effective has nothing to do with "you aren't doing it right" part.
The "you aren't doing it right" can be applied to literally everything.
If no one can actually successfully use your framework the right way, maybe the framework isn't as generally useful as it purports to be.
Having gone through scrum and OKRs I agree with the idea that people tend to sell them to be 'generally' useful at least on the surface.
But once we started doing training/testing towards scrum, it's very clear we needed change as an organization. Some people ended up leaving as a result because it's not everyone's cup of tea.
I can't say the same for OKRs because I joined and it was already in place pretty effectively, but I imagine it to be similar.
I've seen tech OKRs that are things like "reduce tech debt" or "decrease loading time" or "reduce downtime" but without hard tangible stories in sprints, it's just wishful thinking.
How could it be any other way (and work)?
We're a small org that's mostly experienced consultants and contractors. They're energetic, proactive and independent people.
We've adopted OKRs to basically get everyone pulling in the same direction, but without being prescriptive about how the work gets done.
I'll report back at the end of the quarter on how it's working out, if anyone's interested :)
Some of them are rather impossible, like teaching all devs how to do devops within 2 months so they can be self-sufficient.
There's so much wrong with this key result it's incredible :( One big reason that OKRs can fail is just flat out bad OKRs. Teaching all devs to do "devops" within two months ignores the actual issue that they seem to be trying to solve (which I'm guessing is that not having sufficient ops is hurting the company and velocity). It's far too easy to fail and puts pressure on individuals rather than letting the team find the right solution.
If your teams are fairly self-organising, if you are small-a agile and building your process as you discover more, you can plan in things like “dev A will spend 50% of this sprint learning dev-ops, without doing feature work or tech debt; and 10% shadowing dev B who already does Ops”.
That seems actionable, and measurable. Over two or three sprints/iterations/retrospectives, you’ll probably be able to measure the impact of the training (more successful releases, higher uptime, or more story points covered because ops is less a bottleneck etc).
The 'ing' at the end of the verb suggests that this is an activity, rather than an objective or a key result.
If your company is using the term "OKRs" to refer to things that aren't OKRs, then perhaps they could use some help from someone who has used OKRs successfully elsewhere.
The same thing could happen with any new way of working. Imagine a company where no one has used TDD before. Some leaders decide everything must use TDD from now on. People try to work in this new way but, lacking experience and role models, they write their tests at the wrong level of abstraction and end up constraining future refactoring instead of enabling it. Would such an experience mean that TDD is bad or useless?
New executive hired who loves OKRs. Ends up not changing anything because the engineering OKR is “work on the product roadmap” aka what we’ve always been doing.
Only lasting impact is there are a few more slides in the company wide meetings.
At least this Key Result is measurable: 100% devs know how to do devops. :)
But who is the individual accountable for implementing this Key Result? For a task that broad, I hope a department manager is responsible for creating a training program, not individual devs responsible for teaching themselves.
Estimation based on initial gut feeling should be quadrupled and rounded to the nearest quarter... so 9 months. This should provide a 90th-percentile chance of success.
I’m not a fan of it myself. The idea feels like growth hacking, but over many quarters it doesn’t feel great to always be falling short of goals. I always assumed this is part of the OKR model, but maybe it’s just my company.
How do OKRs relate to devops timelines?
OKRs become a shield to deflect any external request that doesn't precisely align with a measure.
Not a fan.
Collaboration should be a goal. A management structure that doesn't explicitly make it one is at fault, and they would have failed regardless of the organizational methodology they chose.
I am absolutely not rejecting the notion that maybe my company is very bad at doing OKRs, but for me they have always resulted in "working to the metric" and have totally inhibited team autonomy - you become ruled by the KRs.
EDIT: the first time my company tried OKRs a company culture of collaboration and experimentation was brought to a shuddering halt as everyone stopped trying to do what was best for the business and started trying to hit their OKRs instead.
I feel the opposite. There is too much cross-team collaboration that "feels good, doesn't do good". OKRs cut through the BS and make you evaluate whether your collaboration is actually doing anything measurable. Focus typically has this effect.
But furthermore I think your comment is suggestive of the "You can't manage what you can't measure" fallacy that underpins OKRs.
I understand most of the comments, but as your startup grows there will come a time when OKRs become useful - when you have 70+ or even thousands of employees. The bigger the company, the harder it is to manage; Google famously hated management and ended up with OKRs. My understanding from a couple of ex-Googlers is that now most people are just gaming the system and aiming at a 0.7 score, so I'm not sure if they are still as useful as they once were at Google.
OKRs / KPIs etc. are not magic. At the end of the day the article is right: the team is what matters, and how teams align and communicate with each other is what OKRs can help with (I find them better than the old top-down KPIs, personally). I was asked by a startup to explain OKRs, and I kept getting questions from the CEO about things related to them not making sense (e.g. some teams argued that goal X did not make sense to them). For me this was not an issue with the OKRs per se; rather, the OKRs helped show very clearly that the organization was not well structured for its general goals. So worst case, if OKRs don't work, you should be able to tweak things and measure the impact (that's the whole idea: measuring to improve).
If you have a startup with fewer than 300 people and you have good communication and goal setting, don't use OKRs. Use KPIs and then help your teams keep iterating.
Input management is so much more effective than output management but it needs visionary leadership. Read: High Output Management - Andrew S Grove (ex Intel CEO).
If you're doing OKRs without some type of indicator, kpi or otherwise, I don't see how you're achieving key results.
OKRs are being rolled out at work, and there's a pretty clear managerial vacuum. You could call it the KRO: leadership makes a request that trickles down to line staff for projects and goals, then each layer of management tries to glue them into a coherent Objective.
First, there's a dearth of high quality engineers in the industry. They've been gobbled up by SV companies with large paychecks and promises of "fuck-you money" paydays. Everyone else gets developers in the range from above average to bad.
Product teams only work with high-quality engineers across the entire product. Why? Because if the engineers are shitty, there's no bound to the creep of shit code - call it "shit creep" (see footnote 1).
How do you fix a shitty product team, other than replacing the entire team, or product with hopefully a better team?
Sure, feature teams are more limited in scope and their abilities, but they at least constrain the shittiness to a feature. If the feature needs to be replaced, the team can just be replaced, not the entire product.
(1) We've all had to deliver shit code from time to time. The thing is the good engineers know it's shit and refactor it as soon as they can.
The idea that all engineers outside of a few Silicon Valley companies are mediocre is laughable if you've ever worked at a FAANG. People pick all sorts of jobs for all sorts of reasons.
My big worry about OKRs is that they will descend into the same shitstorm that ruined SCRUM. Daily standups were never intended to turn into a daily opportunity for micromanagement and application of deadline pressure, but this is exactly what they became. And I can't say it was an external corruption of the process - the dangers are inherent to the technique. I always felt that saying standup and scrum are good, you just have to be careful they don't turn into micromanagement, is like saying nitroglycerine is good, you just have to be careful some chemical reaction external to the process doesn't cause it to suddenly and unpredictably combust. The instability is inherent to the substance.
There was also a mini (or not so mini) industry of consultants helping people "do agile" that elevated "processes and tools" over "individuals and interactions", kind of like how the commandments in Animal Farm ended up amended to mean the opposite of how they were written down.
I see a danger point in OKRs, in that they're a great way for a manager to turn a casual projection into a committed estimate. I see the inversion of "Customer collaboration over contract negotiation" all over again here. The developer (or employee) is lulled into believing that we're all friends and can trust each other, while the "customer" (or client or boss) is lawyering up with a contract; when the time comes, the developer's feet will be held to the fire with firm commitments, and subjected to daily deadline pressure and micromanagement (because agile).
All I'm sayin' is, this could go sideways. But then again, so can any job.
But after a few weeks, laziness crept in: someone forgot to submit their OKR, and then other members did the same thing. Pretty much a "broken windows" effect. The workload was very high, so the OKRs weren't given any importance. Or maybe we did it the wrong way.
That doesn't mean we shouldn't align with OKRs, but a method for applying them consistently and continuously needs to come first, before they can take hold.
Ops can totally have OKRs. Fewer incidents, faster resolution, faster deploy times, longer log retention, decreased AWS spend, the list of objectives I can think of off the top of my head is quite long.
Same for project managers: on-time projects and reporting, better estimates, faster issue triage, more A/B experiments, more features launched.
OKRs done well are effectively a method of improving KPIs by recursively breaking work up into subtasks.
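That "recursive breakdown" can be sketched in a few lines. The objective, key results, and completion numbers below are entirely hypothetical, and real OKR tooling typically scores key results individually rather than averaging a tree, so treat this only as an illustration of the structure:

```python
# Sketch of recursively breaking an Objective into Key Results and
# subtasks, then rolling progress back up the tree.
# All names and "done" values are invented for illustration.

def progress(node):
    """Average completion (0.0-1.0) over a tree of goals."""
    children = node.get("children", [])
    if not children:
        return node.get("done", 0.0)
    return sum(progress(c) for c in children) / len(children)

objective = {
    "name": "Reduce AWS spend",
    "children": [
        {"name": "KR: cut idle instances", "children": [
            {"name": "audit usage", "done": 1.0},
            {"name": "auto-shutdown script", "done": 0.5},
        ]},
        {"name": "KR: renegotiate reserved pricing", "done": 0.0},
    ],
}

print(progress(objective))  # -> 0.375
```

The roll-up is just an unweighted average here; weighting key results differently is the obvious next refinement.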
Well, of course Ops can have OKRs as an HR evaluation method. KPIs' shortcoming is obvious: most of the results need a quantified indicator, which isn't very appropriate for engineers' situations.
But KPIs can still be used as a complement to OKRs, providing clear numbers or dates as targets.
My guess is that iterating on OKRs every week and month is quite challenging, and most OKRs stay unchanged even after months of iteration. HR and managers need to figure out how to keep everyone updating their OKRs effectively.
I worked at a company that used OKRs, but the teams weren't set up so that people were responsible for the things their OKRs asked them to do, and it produced very little (thankfully the teams were generally good and it wasn't a disaster, but it was a waste of effort).
If the OKRs aren't embedded into the sales teams as well (which basically means you can't have separate sales teams) then it likely fails.
Quote: "OKR’s are first and foremost an empowerment technique."
Don't empower anyone. Instead, work on getting rid of / minimizing / improving anything that dis-empowers. This is usually easier, cheaper and far more straightforward. It's not always easy to get rid of disempowerment, but you can at least put things in place to minimize it. Empowerment is barely measurable in real terms; all the metrics are indirect. Even turnover doesn't accurately measure it. Dis-empowerment, on the other hand, is a stinking mess. Follow your nose. Or your heart.
Quote: "Manager’s Objectives vs. Product Team Objectives"
One of these needs to be fired / removed / re-evaluated. If the manager is fighting the team, then who's more in alignment with the overall vision? Why would you tolerate a manager that is fighting their team? Or vice versa? If the team members are spread across leaders and there are issues, then there's some misalignment that needs to be fixed. Now. Iceberg! Change direction or sink!
Pointless drama and deliberate friction created in large organizations is why they squash innovation. It's why they do mindbogglingly stupid things. It's also why a manager or executive can benefit when a team fails, even their own. Incentives can be so screwed up that when a division fails, a number of people in the division cheer because their individual incentives are all green. This is too common.
Another quote: "The main idea is to give product teams real problems to solve"
You're doing WHAT?! How can you possibly tolerate non-real problems? Why would this be even necessary? That's like saying, "it gives them a way to generate profit". Really? That's a thing you're going to add?! As if its some new thing? What?!
Quote: "Passive manager"
What is this creature? How can you passively manage anything? Not even drunk people are passive; only when they become unconscious do you start to have passivity. I've met managers whose teams zip along with the manager gone for a month. This isn't rare, and it's still not passive if adult supervision is in place. Why? Because an active manager puts processes in place to monitor and optimise what is going on. This is active management.
Quote: "Stop doing manager objectives and individual objectives"
Correct. Any mismatch means pointless friction. Why was this tolerated? Perhaps because drama at the bottom and middle keeps people too busy to notice the silliness at the top? Maybe. Regardless, it sounds expensive: How's that productivity going?
Quote: "Leaders need to step up"
Sure, and they should be allowed to lead. Which is often NOT the case. Plenty of managers leave team leaders in the dark about key aspects of what is going on and what is planned. Leading means choosing a direction. How can you know the direction to choose if you don't know the destination? This applies generally as well. Instead of expecting leaders to step up, why is there a step in the first place? Shouldn't there be a level playing field? Sort out the organisational mess so it's as level as possible - remember the bit about removing disempowerment? I wasn't kidding. Here's a symptom: the need to step up when no step should exist.
Go for enabling and self-directed team members that can tolerate and operate successfully with adult supervision. This kind of supervision requires managers, directors and team leaders as well.
One last thing: Get rid of deadlines and use due-dates instead. Not just wordplay. We need to be using project X on date Y. Make it happen and don't drop dead doing so. Change the mindset to fit this approach. Those deadline crunches? Not good. Dropping dead after a due-date means something went very wrong. Everything must be re-examined to avoid it in future. The mess has to be cleaned up. Supports put in place. Additional followups scheduled and kept. This works for projects as well as childbirth.
Ok. I feel decaffeinated and that is dis-empowering. Time for coffee. See? Fixable.
For example, Austin's Measuring and Managing Performance in Organizations gives a helpful 3-party model for understanding how simplistic measurement-by-numbers goes awry. He starts with a Principal-Agent and then adds a Customer as the 3rd party; the net effect is that as a Principal becomes more and more energetic in enforcing a numerical management scheme, the Customer is at first better served and then served much worse.
As a side effect he recreates or overlaps with the "Equal Compensation Principle" (described in Milgrom & Roberts' Economics, Organization and Management). Put briefly: give a rational agent more than one thing to do, and they will only do the most profitable thing for them to do. To avoid this problem you need perfectly equal compensation of their alternatives, but that's flawed too, because you rarely want an agent to divide their time exactly into equal shares.
Then there's the annoyance that most goals set are just made the hell up. Just yanked out from an unwilling fundament. Which means you're not planning, you're not objective, you're not creating comparative measurement. It's a lottery ticket with delusions of grandeur. In Wheeler & Chambers' Understanding Statistical Process Control, the authors emphasise that you cannot improve a process that you have not first measured and then stabilised. If you don't have a baseline, you can't measure changes. If it's not a stable process, you can't tell if changes are meaningful or just noise. As they put it, more pithily:
> This is why it is futile to try and set a goal on an unstable process -- one cannot know what it can do. Likewise it is futile to set a goal for a stable process -- it is already doing all that it can do! The setting of goals by managers is usually a way of passing the buck when they don't know how to change things.
That last sentence summarises pretty much how I feel about my strawperson impressions of OKRs.
 https://www.amazon.com/Understanding-Statistical-Process-Con..., though I prefer Montgomery's Introduction to Statistical Quality Control as a much broader introduction with less of an old-man-yells-at-cloud vibe -- https://www.amazon.com/Introduction-Statistical-Quality-Cont...
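For what it's worth, the Wheeler & Chambers point is easy to demonstrate: a stable process fluctuates within control limits computed from its own history, so a target set inside those limits can be "hit" by pure chance. A rough sketch using the conventional mean ± 3-sigma limits, with all numbers invented:

```python
# A stable but noisy process: readings drawn from a fixed distribution.
# Control limits come from the process's own history (mean +/- 3 sigma).
import random
import statistics

random.seed(42)
history = [random.gauss(100, 5) for _ in range(50)]  # baseline readings

mean = statistics.mean(history)
sigma = statistics.stdev(history)
lower, upper = mean - 3 * sigma, mean + 3 * sigma

goal = 95  # an arbitrary management target
print(f"natural process limits: [{lower:.1f}, {upper:.1f}]")
print("goal lies inside natural variation:", lower < goal < upper)
```

Since the goal sits inside the limits, a week that "meets" it tells you nothing: the process was always going to produce such readings occasionally.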
I would argue the system is working as intended. Contrary to your assertions, you don't want employees spreading effort like peanut butter, you want to focus them on executing one or two things quickly and getting value out of that quickly. Instead of launching 12 features a year from now, I'd rather launch 1 feature a month.
> you cannot improve a process that you have not first measured and then stabilised.
There is of course, a certain amount of reasoning under uncertainty involved. One of the lessons many folks learn from a/b testing and OKRs is just how hard it is to actually make a difference, and folks need practice calibrating.
That's not quite what I was driving at. Optimisation is made on the measurement. Measurement is only necessary because the Agent is not perfectly observable, there is an information asymmetry between Principal and Agent.
That's why Austin's model is so helpful. There are many things that must be done in order to best satisfy the Customer. Some of those are measurable, some are less measurable. But a rational Agent looks at any basket of measurements and will optimise for one of them: the one that pays best.
It's not enough to say "just this one feature and no peanut butter please". You have to define what the one feature is. You have to provide an exact measure for it. Agents can then either optimise honestly, or they can go further and optimise fraudulently. If honestly, the Principal realises that they actually need a basket of values to be optimised. But then they need to apply equal compensation, because the Agent will simply ignore any measurement that doesn't maximise their results.
I believe measurement is useful. But I also believe that connecting it to even the whiff of reward or punishment is beyond merely futile and well into being destructive.
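The Equal Compensation Principle mentioned above can be put in toy-model form: a payoff-maximising agent with a fixed time budget and unequal rates puts all effort into the single best-paying measured task. The tasks and pay rates below are invented purely for illustration:

```python
# Toy Principal-Agent model: given unequal pay rates per task, a
# rational agent allocates the entire time budget to the top payer.
# Task names and rates are hypothetical.

def allocate(rates, hours=40):
    """Return hours per task for a payoff-maximising agent."""
    best = max(rates, key=rates.get)
    return {task: (hours if task == best else 0) for task in rates}

# Unequal pay: the cheaper-but-necessary work gets zero time.
print(allocate({"ship features": 50, "write tests": 30, "fix docs": 20}))
# -> {'ship features': 40, 'write tests': 0, 'fix docs': 0}
```

Which is exactly the "peanut butter" failure in reverse: unless the measured alternatives pay equally, the unmeasured (or under-paid) work simply doesn't happen.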
I've read John Doerr's Measure What Matters OKR book and personally used OKRs for a few quarters. Google's re:Work site about OKRs is a short and adequate summary:
PMF = Product Market Fit
Unsure on the OR.