Why OKRs might not work at your company (svpg.com)
317 points by codesuki on July 19, 2020 | hide | past | favorite | 135 comments

This is the key point of the article:

“ Those successful companies aren’t successful because they use OKR’s. They use OKR’s because it is designed to leverage the empowered product team model. And as I have tried to make clear with years of articles and talks, the empowered product team model is a fundamentally different approach to building and running a tech-product organization.”

Amen to this. Google is an example company that claims to have had great success with OKRs. But let’s not forget that before OKRs, they had empowered teams, a growing market which they disrupted and grew, and a unique culture.

My bet is they would have done spectacularly well without OKRs, using some other method as well. You can’t say that OKRs caused Google to succeed, and they certainly won’t be enough to turn a company or org around, like the author argues.

Is Google successful because it's a well-run company in the first place? It's obviously not terrible (I'd personally love to work there) but life is a lot easier when you have the biggest cash cow ever plus a few smaller ones.

In general I'm skeptical whenever anyone wants to copy Google as their advantages and disadvantages are so far removed from a typical company and always were. I don't think they're a particularly good company to model for most companies.

Back when it was popular to have a flat structure or no managers or whatever, I started suggesting "Every type of management works until money gets tight." I think the principle still applies, but a more useful/general version is to pick your structures and processes around resource availability (good vs. bad times):

Good Times: pick something that can amplify abundant resources. There is going to be less bureaucracy around decisions and spending so move quickly and place lots of bets while you can.

Bad Times: pick something that works with meagre resources. You're going to spend more time justifying spending.

Regardless of whether OKRs are meaningful or important (IMO they aren't), Google had OKRs in 1999, less than a year after founding, before AdWords. They didn't have notable product teams before OKRs.

Google seems terribly run, frankly. I wouldn't use them as an example of good process, organization, or really anything other than the fact that being early to advertising and being the internet's front page for ~20 years is a great way to make money.

This. Google seems to not have anyone with a cohesive vision or even anyone that knows how to build a platform or maintain a product. I personally would like to know what's going on with the product owners at Google, because it really seems like none of them talk to each other.

Google is a fantastic engineering company, but terrible at building products.

This is objectively false, as they have developed many of the most successful products on this planet. There are hundreds of webmail products, but Gmail is the best. Google Maps is best in class; other great products are Google Earth, Analytics, YouTube, Cloud, Android, and many more.

Yes, you can find flaws in any of those but it's also hard to find better products in the same category. Google is not in any way "terrible at building products", they are one of the best companies in the whole world in building products.

Android wasn't built by Google. It was the result of an acquisition. Same with YouTube. Same with Google Earth. Heck, even Google Docs was the result of them acquiring Writely.

Maybe it's more accurate to say that Google are one of the best companies in the world at buying products.

Yeah, except one counterfactual scenario: Andy Rubin actually tried to sell Android to Samsung. Let's assume the deal was made; do you really think Android would have grown into this dominant position in Samsung's hands? I know Samsung very well, and I am 99.9% sure it would have been a miserable failure.

All those acquired products were nearly non-existent compared to their post-Google versions. The founders may deserve some credit, but it was mostly Google's work that turned them into real products. Let's not be that idea guy; what really matters for success is execution.

EDIT: typo

You're assuming that those products weren't capable of reaching scale without Google's intervention. By that logic, Bill Gates should have sold Microsoft to IBM, because that is the only way Windows would have ever reached scale.

I'd argue that indeed, many of these acquired companies could not have reached scale without Google. Consider YouTube, which was burning millions on hosting and was always on the verge of bankruptcy if it couldn't raise more funding.

Add in a company that can fund operations, provide user flow, provide a pool of advertisers, real compensation to engineers, and you have a real success.

I give Google a lot of kudos on this.

Do we really want another Sun Microsystems happening?

> By that logic, Bill Gates should have sold Microsoft to IBM

False analogy, because:

- Software, not hardware, provides most of the utility, which is why IBM needed Microsoft more than vice versa, and why Android wouldn't have been nearly as much of a success had it been absorbed into a single phone-hardware company.

- Google in 2005-2010 was a very different company to IBM in 1980; i.e., much more innovative, fast-moving and highly motivated to grow fast to rival the iPhone.

Google was willing to pay the most. This means it was more confident than anyone else of being able to scale the products.

> do you really think Android would have grown into this dominant position in Samsung's hands? I know Samsung very well and I am 99.9% sure it would have been a miserable failure.

And maybe that would have been for the best. Consider that within a year of the HTC Dream (the G1, the first commercial Android phone), Nokia released the N900, an amazing device running real Linux.

We could have had Maemo instead of Android.

off topic question: what’s the idea behind “EDIT: typo”? HN doesn't indicate in any way that your post has been edited and certainly you’re not changing the meaning in a tangible way warranting the etiquette. Nobody is going to fault you for fixing a typo.

IIRC if you edit after a certain period of time, HN marks your post with an asterisk to indicate that it's been edited.

Buying a company and making it successful is a huge task that is often bungled. Look at the graveyard of M&A by Yahoo. Now contrast that to Google's spectacular M&A record -- YouTube, Google Sheets, Google Analytics, etc.

Gmail is far from the best mail system - in enterprise settings it's possibly the worst, actually. It's cluttered, threads are unreadable, and you get a tiny window by default to write a message... maybe it was hip 16 years ago, but that's the past. Because of all these things I'm not using it in a private setting either.

YouTube is full of ads and fake comments, including spam.

I agree with the parent post that Google products benefited most from being first or being acquisitions - it's like Microsoft early 2000s or so.

They have a handful of truly wonderful products, but for everything good you just listed, there's at least 2-3 things that have been shuttered because they were poorly executed, or are currently struggling (Google Cloud). Android is hardly a product either, the Pixel 4 is a product, but Android is an avenue to sell apps on their app store and integrate their data gathering services for targeted advertising, and they didn't even build it. Google didn't build YouTube, it was already the largest video sharing platform in the world when they bought it. Analytics is probably good, I haven't used it.

> gmail is the best

That's a very subjective statement. I don't think gmail is the best webmail out there, but it's certainly one of them. It's also one of the oldest. It also adds the contents of all your emails to their profiling service.

I stand by my statement that Google is terrible at building products, they've only built half of the things you listed and bought the rest. One of them could barely be counted as a success (Cloud), and another isn't a product (Android)

Youtube and Android were acquired by Google, and there are definitely other products that are also the result of an acquisition.

YouTube, Android, and Earth are acquisitions; Cloud is nothing compared to its competition; and Gmail is almost two decades old.

In the last decade, I can't remember a single innovative original Google product, or even a copy-cat product that is better than the competition.

> Cloud is nothing compared to its competition

I've been working with GCP a lot during the last two years, and I think the experience is actually quite a bit better than people generally assume or adoption numbers suggest. It's probably not a great fit for large companies with really big engineering departments building really complex systems, but getting started on GCP as a small to medium org is a breeze, products at different levels of abstraction complement each other well (so it's easy to grow) and everything seems to be built to be minimum hassle and require a lot less fiddling by an engineer to get going than the corresponding AWS product. Much better web console than AWS, great K8S offering, BigQuery can do machine learning nowadays, which is actually usable for simple tasks. Everything has lots of sane defaults, it's usually only a couple of clicks to spin up pretty much any resource; in AWS this tends to be a huge pain.

Although, if you're in the EU and bound to a strict interpretation of GDPR, then your experience is going to be absolutely miserable, because there's no way to actually restrict data to stay within the EU for the vast majority of products. A lot of the serverless products have something like "data may be processed in US West or global" hidden in their docs (which legal tells me is a big no-no), and while you can set allowed regions, even BigQuery violates that silently, so that's useless and dangerous as well.

So if they were to get better at selling what they have (which they really, really suck at; we're using them not because, but in spite of, their sales efforts) and maybe discover Europe one day, then it'll be a very strong contender for AWS/Azure.


They haven't developed one for quite a while now.

>Google is a fantastic engineering company, but terrible at building products.

not anymore

I don't know that they're a fantastic engineering company. I'm not sure what Google is great at - a few things, perhaps, which feel like they're successful almost because they're treated as separate from the rest of the organization.

OKRs are one of many parts of that, in my opinion.

Google engineering is fantastic and respectable. What gives the impression that it is anything else other than excellent?

My personal opinion is that Google has some really strong engineers, and some pockets of engineering are truly exceptional. That is it.

The impression is based on lots of anecdotal experience. I know many people at Google, I use many Google products, I talk to other industry leaders about their impressions of Google.

Their products, especially their mobile products.

Take, for example, Google Tasks. It has roughly the same number of features and the same design as when it launched two years ago. Presumably there are engineers assigned to it. What on earth are they doing? Meanwhile, Microsoft Tasks has subsumed all the features of Wunderlist and has become a far better to-do list app. It's a small example, but it's emblematic.

The same applies to Google Apps. Google Docs, Google Sheets, and the rest haven't had any substantial new features or (more importantly) performance improvements in 10 years. Try loading a hundred-page Word document in Google Docs and it still lags out.

And then there's chat. Oh, lord, Google's chat applications. How many do they have again? Five? Six? Which ones are they canceling this year? Every year, they promise that they're going to "unify" [1] chat, and every year they fail to deliver on that promise.

The only product at Google that actually seems to get updated and maintained is search. Everything else is either a half-baked science experiment that is soon to be canceled or a stagnant leftover without a clear product roadmap. You can quibble that this is bad product management rather than bad engineering, but as far as I'm concerned, bad product management is bad engineering.

[1]: https://arstechnica.com/gadgets/2020/01/report-google-planni...

Well here’s one example: https://reasonablypolymorphic.com/blog/protos-are-wrong/inde....

Another is the frat-boy hazing code review culture. Heck, there are plenty of nooglers' "first code review" memegens floating around. It's totally not bullying, it's just joking, right?

It is in fact just a joke. New grads write shitty code (I certainly did when I started at Google) and after a few months of attentive reviews they write good clean code. It feels like hazing a bit at first because all us arrogant ivy league types graduate thinking we're geniuses and then it turns out, no, we can't even do the equivalent of fetching coffee correctly and have to be handheld through it. But afterwards people joke about it the way they joke about any other shared experience.


The author of that article is completely confused about the goal of protobuf. Read this rebuttal comment, written by the previous owner.

Thanks for the link. It's quite a weak rebuttal, however. For example, "this inability to distinguish between unset and default values is a nightmare" is ignored. And, to me, "If it were possible to restrict protobuffer usage to network-boundaries I wouldn’t be nearly as hard on it as a technology" is the most damning criticism, and that's also ignored. The so-called rebuttal basically amounts to "we're using it to make oodles of money in the ads pipeline so it must be good software engineering." The claim about success in the market is completely true, but it's not a rebuttal to the claim that the designers of protobufs are apparently ignorant of current computing theory and just patched together an ad hoc mess. Nobody capable of observation disputes that the standard substandard quality of most professionally written software is no impediment to success in the marketplace; there are too many examples to claim otherwise.

> this inability to distinguish between unset and default values is a nightmare

proto v2 and v3 let you distinguish set and unset default values (no idea about v1)
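A plain-Python sketch of what field presence gives you (no protobuf dependency; `Message` and `count` are invented for the example, not the real protobuf API): a sentinel object lets you tell "count was set to 0" apart from "count was never set".

```python
_UNSET = object()  # sentinel: distinct from every legitimate field value

class Message:
    """Toy stand-in for a generated proto message with one int field."""

    def __init__(self):
        self._count = _UNSET

    @property
    def count(self):
        # Reading an unset field yields the type's default, as protobuf does
        return 0 if self._count is _UNSET else self._count

    @count.setter
    def count(self, value):
        self._count = value

    def has_count(self):
        # The analogue of proto2's HasField / proto3 `optional` presence
        return self._count is not _UNSET

m = Message()
default_but_unset = (m.count, m.has_count())   # (0, False)
m.count = 0
default_and_set = (m.count, m.has_count())     # (0, True)
```

Without the presence bit, both reads would look identical, which is exactly the "unset vs. default" complaint above.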

> If it were possible to restrict protobuffer usage to network-boundaries I wouldn’t be nearly as hard on it as a technology

It's a serialization format. It doesn't claim to be anything else. When people use it as their application's heap data model, they're misusing it. People are lazy and love to use their wire format as an internal data model (no one likes writing converters to/from the wire format), so this problem plagues everyone, but wire serialization is a fundamentally different problem from representing data within your application. When you conflate these problems, you get abominations like Java Serialization.
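A minimal sketch of that boundary separation. Here a plain dict stands in for a decoded wire message (the fields `user_id` and `email` are invented for the example), so it runs without the protobuf library: decode at the edge, validate once, then hand the rest of the application a real domain type.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    """Internal domain model: validated, immutable, decoupled from the wire format."""
    user_id: int
    email: str

def user_from_wire(wire: dict) -> User:
    # Validate at the network boundary...
    if wire.get("user_id", 0) <= 0:
        raise ValueError("user_id must be positive")
    if "@" not in wire.get("email", ""):
        raise ValueError("malformed email")
    # ...then the wire type never leaks into application code.
    return User(user_id=wire["user_id"], email=wire["email"])

user = user_from_wire({"user_id": 42, "email": "alice@example.com"})
```

The point is the direction of dependency: application code sees only `User`; only the converter knows what the wire looks like.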

> wire serialization is a fundamentally different problem from representing data within your application

I agree with this and the author of the criticism evidently does too, but his position is that failing to make this distinction indicates that protobufs are poorly designed. I don't see how "programmers are lazy and fail to work around the poor design" is much of a rebuttal.

In my mind, a good design has one or more appropriate representations for each abstract type. The marshalled representation is certainly one, but one might also want more than one in-memory representation of the same type, depending on the problem domain. The old Lisp trick of representing an immutable linked list with a single contiguous block of memory is one example [1].

[1] https://en.wikipedia.org/wiki/CDR_coding
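A toy illustration of that point in Python (illustrative only; real CDR coding works at the memory-layout level): the same abstract immutable list as cons cells versus a single contiguous block.

```python
def cons_list(items):
    """Build a classic cons-cell list: each node is a (car, cdr) pair."""
    node = None
    for x in reversed(items):
        node = (x, node)
    return node

def cons_to_python(node):
    """Walk the cons cells back into a contiguous Python list."""
    out = []
    while node is not None:
        out.append(node[0])
        node = node[1]
    return out

compact = (1, 2, 3)            # contiguous representation (one block)
linked = cons_list([1, 2, 3])  # pointer-chasing representation
round_trip = cons_to_python(linked)
```

Both represent the same abstract value; which one you want depends on whether you'll mostly share tails or mostly iterate.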

> failing to make this distinction indicates that protobufs are poorly designed

The author never offers any evidence that protobufs fail to make this distinction, just that people are lazy and misuse them.

You can't control what people do with generated types. I don't know how you'd even write a linter for Java or C++ that would know what an appropriate usage of protobufs is, to say nothing of writing such linters for every target language and then integrating them into every possible build system, CI framework, and code review application.

Those thorough Google code reviews you complained about earlier -- one of the things they taught me was not to use protos as the application data model. Read the proto off the wire, then convert it to another type or if you're in a hurry wrap it in another type that does validation checks and hides the proto accessor methods.
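A minimal sketch of the "wrap it" option described above. A dict stands in for a hypothetical generated proto message, and `quantity` is an invented field: validate once at construction, then hide the raw proto accessors behind domain-level ones.

```python
class ValidatedOrder:
    """Wraps a (stand-in) proto message; callers never touch wire-level fields."""

    def __init__(self, proto: dict):
        # Validation checks happen exactly once, at the boundary
        if proto.get("quantity", 0) < 1:
            raise ValueError("quantity must be at least 1")
        self._proto = proto  # kept private

    @property
    def quantity(self) -> int:
        return self._proto["quantity"]

order = ValidatedOrder({"quantity": 3})
```

Compared to full conversion, the wrapper avoids a copy at the cost of keeping the wire object alive underneath.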

> I don't know how you'd even write a linter for Java or C++ that would know what an appropriate usage of protobufs is

I wouldn't try to massage protobufs into satisfying that need, but I do agree that it's an area that should get more research and development. Looking at it from a linter perspective is making the problem way harder than it needs to be though. Verification is, generally speaking, a much more difficult problem than construction. For example it's much easier to construct a product of two large primes than it is to verify that a given number is a product of same. Anyhow, I'm not one of those people who thinks the theorists know everything and practitioners are all idiots, I'm just one of those people who think practitioners should learn from theorists and that, sadly, the former are often irrationally resistant to the notion. Not always though, TLA+ is a great example of a tool working software engineers use to build real systems that are theoretically verifiable.

I was being a bit cheeky about the code reviews. I do think that the reviewers delighted in the opportunity to be shitty to an unseemly degree, but I also agree that it was net beneficial. That said I did find the engineering quality in the SRE orgs was significantly higher than the SWE ones. Which is the complete opposite of every other company I've ever worked for.

I believe G is 'well run' if you think of them as a 'whole' - even if it means 'poorly run' in many ways.

Search and AdWords are probably 'well run' by whatever measure. And they make money with radical surplus, kind of like pumping sweet oil cheaply.

And like such places that have cheap oil, a lot of other things are dysfunctional.

We can look past some failures like 'Wave' as merely taking risk, which is fine. But strategic disasters like 'Google Plus' show an inability to do something coherently across business units.

YouTube, in many ways, has a really, really poor interface and some really bad and slow-moving features. But hey, it makes money.

I worked at a major handset maker that made money hand over fist on devices, and it was laughable how dysfunctional some things were.

This is normal.

OKRs are probably a minor contribution to G's success, and maybe they help confirm G's unique 'culture' as much as anything, in that it's powerful to be able to say 'this is how we do things here', it's like a kind of 'internal branding'.

Why didn't the other companies that had those things succeed, then (Yahoo, AltaVista, etc.)?

I worked at Yahoo from 2004-2011. This is my opinion only. Yahoo was doing pretty well at times during that period. Certainly, Google had better revenue numbers and better margins, but Yahoo was still making tons of money, just not as much.

Leadership had a tough time explaining what Yahoo did, although they had a clear goal in 2004-era onboarding --- be in the top 3 of all internet verticals, either through owned-and-operated properties or cobranding --- basically, be the one place you could do everything you want to do on the internet (except porn).

Yahoo search tested comparable to Google search in user research if shown with Google headers, but not with Yahoo headers, and it was expensive to run. Yahoo Search Marketing (Yahoo's version of AdWords) was not friendly to small businesses and was way less realtime, and as a result had lower CPCs than AdWords.

The Microsoft purchase would have been a disaster (but maybe I'd have gotten a free Xbox?) because culture issues would have ruined the whole thing. The Microsoft search + ads deal was bad for a few reasons: Microsoft didn't make their revenue numbers and negotiated down from the contract about a year in; Yahoo kept a large amount of their search team and didn't get the cost savings they were planning; and Microsoft's ad platform is/was worse than Yahoo's.

I don't know too much about what happened after that, but I suspect not having control over the money making apparatus made it hard to control their destiny, and now we have Verizon owning them and finishing up the death spiral.

I joined Yahoo in 2013, and my impression was that it was a company that still made a ton of money, because at one time a lot of really clever people used to work there, and their stuff continued to make money, but the people working there were not clever enough to build on it, they were barely able to keep up on maintenance of it.

I encountered a ton of cool technical solutions to things that were Really Hard Problems in the 90's or 2000's, and that was appropriate for hardware from that era, but the outside world moved on, while everyone inside Yahoo still thought that their shit was hot shit.

There were for sure a lot of smart people still working at the company, but the company had 0 cool factor, so they had an incredibly tough time recruiting and retaining people that could move the state-of-the-art forward.

...and no buzzword management strategy like OKR's or whatever is going to change that fundamental problem.

Wouldn't your comment suggest that management DOES matter then? The person I replied to says that google has terrible management but is successful because of their starting position. As you have pointed out, bad decisions by management can lead to a company's downfall.

Yes. Definitely, sustained poor management can lead to a company's downfall. You have to continually sabotage your company to kill it. Just mediocre management or incoherent management won't do it. (But it might set the stage for worse decisions)

Google does seem to sabotage a lot of its products, but so far hasn't really sabotaged its core products or its company. As long as they don't destroy the dollar printing press (Adwords/search), they can keep pissing away money on dumb things and it won't kill them.

> basically be the one place you could do everything you want to do on the internet (except porn).

Shame, because that was the one thing they were quite good at at the time.

I don't know. I don't feel that I have to know to make the statement I made.

Your statement was that Google was successful only because of its starting position, and that its bad management doesn't matter because it can't screw up being in such a great position.

I think the existence of other companies that were in similar starting positions, and then subsequently failed, would prove that you need more than just that starting position to succeed.

My statement is that I do not believe that Google is well run today, or in the recent past, and that OKRs are part of that. I believe that their position is not earned through strong organizational structure in recent years, but that they simply have a strong foothold in an extremely powerful market.

What allowed them to 'win' in that space decades ago? I don't know, nor do I feel the need to know in order to judge much more recent history. It could have been organizational structures that, in a different time, or with a different group of people, or at a different scale, or whatever, were the key. Or something else. I have no comment on it.

Curious to know from your point of view; which company is well organised and well-run?

I don't think I know of a company that has really nailed being well run at scale. I can maybe think of a few that I don't want to name, but they aren't in tech anyway.

But I have a very limited view, having lived in SF for a few years and just having insight into the companies I've seen. I bet there are loads of them that I'm just ignorant to.

Thanks for the tldr. Maybe good to always add that when posting a link :)

Also from the article: "You can’t take your old organization based on feature teams, roadmaps and passive managers, then overlay a technique from a radically different culture, and expect that will work or change anything."

Well said!

Just copy pasted ;)

I've worked with OKR's at large e-commerce companies, small tech incubators, consultancy firms and "Unicorn" stage startups. At each stop OKR's were absolutely worthless.

What I really appreciate in reading this article is that they were absolutely worthless for the reasons this author has stated: there was no real empowerment behind them. Most often they were used for personal development since I had no meaningful say in my actual work. It's always a stretch to put personal development in terms of OKR's: "I'll write two blog posts this quarter." "I'll run one lunch and learn"-- ok those are fine activities but we hardly need OKR's to grow as engineers-- and just imagine how annoying it is to be at quarter's end struggling to write a blog post when you find out midway that you're enjoying learning in an entirely different manner.

This is totally my experience with OKRs as well. At my last company they were a feeble attempt by micromanagers to manage a bunch of teams into being self-managing. But at the end of the day they were giving us solutions not problems to solve, which obviously didn't work out well. Most of the software we wrote missed the mark and didn't get used.

The biggest complaint I have with Marty Cagan is that he always describes the perfect product organization and says anything else is shit.

Well, you can't choose your execs and you can't choose your culture... so thank you for describing the ideal world, but almost no company will be able to apply your advice.

Again, in this article he doesn't explain what to replace OKRs with; he just says that if OKRs don't work, then your culture is the issue.

There's some idealism, but the rub with OKRs is that they obviously require a culture of individual empowerment - at least that's how it's always looked from my perspective. It's been a baffling reality of my own startup experience that OKRs get introduced by management teams that aren't willing to truly delegate decisions to individual teams (and perhaps individuals), despite the fact that the entire framework they just introduced requires this in order to have any real value.

After all, what's the value in a tool aimed at providing alignment in decision-making, if nobody actually gets to make decisions?

So yeah, idealism, but of a form that's worth aiming for.

It's like the Martin Fowler of business organization

I had the same thought. What good is the nicest theory when all the practical implementations you can find basically abuse the name for something that is quite the opposite?


If your business relies on a blogger for its operating strategy, you are doomed.

Pithy and funny, but counterpoint: we've all surely noted businesses for which this would've been an improvement.

> Well you can't choose your execs, you can't choose your culture

Can you choose where you work?

My company is adopting this. They just gave a big company-wide meeting/sales presentation on it. I ended up tuning it out after a few minutes. Maybe that was a mistake, but I am just so sick of the corporate gobbledygook dog-and-pony show. At the risk of sounding ignorant, these corporate productivity systems feel like cottage industries invented by consultants to sell to management looking for a reason to justify their salaries to investors.

The core concepts behind OKRs (and others like it) are pretty good. The problem is that almost all of these methods require managers to cede power and control over to the developers. And that rarely seems to happen. So as a developer it just becomes another layer of bullshit to deal with.

It pretty much boils down to: Set well defined, well reasoned, and slightly ambitious goals...communicate them to your developers...and then leave them the hell alone to try and accomplish them.

I’ve always found it shocking (well, not that shocking honestly, after years at everything from 10-person startups to 5,000+ person companies) that you get these folks who read “Measure What Matters” or some cherry-picked Andy Grove stuff and decide “this will fix our velocity and other problems”.

Like no, hard no. OKRs are great - I personally love them - but if your company has deeper issues, a set of OKRs won't help and will likely hurt, as it obfuscates the underlying issues that exist.

Most times they don't even read the books. They just stop at "what doesn't get measured doesn't happen." Great - that leads people to measure all sorts of stupid stuff, like lines of code shipped to production. Who cares if it doesn't contribute to anything someone's actually going to use.

The last company I worked for had a problem with the service, it lost their clients money. Their solution was to make everyone in the management team read a 'business book' per week, and discuss it in weekly meetings.

> ... still continue to tell them the solutions they are supposed to deliver – nearly always in the form of a roadmap of features and projects with expected release dates.

This has always been my experience with OKRs. They were implemented by directionless companies with weak management as an attempt to bring structure, but the real reason there was no structure or coherence is that nobody in leadership agreed about what the company should do.

They ended up being such vague, arbitrary, and ambiguous goals, like "dominate this sector" or "become a leader in that vertical", or "hire x engineers by y time" that they were effectively meaningless. What does dominate mean? What are these new engineers going to do once they're here? Nobody had answers to these questions, yet achieving the OKRs was paramount.

I'm sure there are examples where they work well, but I suspect this author is onto something and the companies that use them successfully would be well managed with or without them.

Most of the teams I've been on at Google don't even bother with OKRs. The ones that did, it was a bit of a dog and pony show, a poor substitute for just doing some regular iteration task planning.

It feels like OP has made a bit of a straw man out of OKRs. More charitably, perhaps we did not get our information from the same place - all I know about OKRs is from reading "Measure What Matters" and implementing it in my own company. Reading this article makes me feel like someone took an incredibly simple idea and decided that what it needs is more complication.

From TFA: "most companies are not set up to effectively apply this technique". Yes. And if you read the above book, you'll learn that this is sort of the point. You WILL fail at your first few OKR cycles, but you're supposed to use those experiences to change your company into one that CAN set and achieve objectives.

If you think your company is one where OKRs won't work, all the more reason to do them.

OKRs are about creating and communicating strategic short-term objectives across the company, so they remain top of mind. And that's 80% of what you need to know! The objections to OKRs mentioned in the article don't make sense to me, because it seems to follow a different definition of what OKRs are.

I am curious about your perspective. I work at a large-ish company (a few thousand people), I manage ~25 people, and this is the first organization where I've used OKRs. I found them pretty worthless myself, and for similar reasons as the OP.

Fundamentally, OKRs only make sense if your organization is focused on outcomes, not on output. And underlying that, it's really about the culture in your org. It requires a good executive and senior management team to actually cascade the OKRs into terms beyond direct financial ones. E.g. if your product objective is to increase retention by X, how do you translate this into a strategy?

Also, OKRs don't make much sense if product/engineering are run separately, which is still extremely common outside of tech companies. The cascading, which I considered one of the most fundamental differences compared to traditional management by objectives, is often a productivity killer: if you do OKRs every quarter and you have 3-4 levels to cascade through, you're going to get your OKRs at the end of the first month, which means you realistically only have 6 weeks left in your quarter.

Finally, I don't understand Measure What Matters. I found it completely worthless as an Eng. Manager, with absolutely zero actionable insight. It could have been 5 pages. The famous football team example is the only one with enough detail to actually explain things. On similar topics, High Output Management, or even The Hard Thing About Hard Things, were much more useful for a middle manager like me.

> all I know about OKRs is from reading "Measure What Matters" and implementing it in my own company. Reading this article makes me feel like someone took an incredibly simple idea and decided that what it needs is more complication.

I know this is HN and you could totally be Drew Houston; but I'm going to go out on a limb and say that your company has fewer than 500 employees.

The problems the OP is describing are, in my opinion, usually present at a large, bureaucratic company. There is no strict size definition for this (I've seen incredibly bureaucratic 50-person startups), but Dunbar's number [1] is a good rule of thumb.

> You WILL fail at your first few OKR cycles, but you're supposed to use those experiences to change your company into one that CAN set and achieve objectives.

Once you get into a bureaucratic company, it's largely about avoiding failure or the perception thereof. Since you own your company, it's easy for you to take this overall view of "if it failed, it failed". Any mid-level manager or individual contributor is going to be incentivized to avoid the perception of failure, since it's super-bad in a large organization. Google is often used as a counterexample, but I'm not entirely sure that Google is a company that's good at making products. Also, Google uses the "hire extremely smart people and throw a truckload of money at them" approach, which most companies under discussion (including yours) likely don't; and that (IMO) is a far better predictor of success than OKRs.

> The objections to OKRs mentioned in the article don't make sense to me, because it seems to follow a different definition of what OKRs are.

This is the No True Scotsman fallacy. The reality is that the "cascading" nature of OKRs often gets lost in translation and doesn't take into account macro-level changes in your specific vertical. Yes, the company's goal is to become a leader in the field of underwater baskets; how does this translate to me rewriting that terrible frontend code that the cofounder's buddy wrote in 2007 and has never been touched since? Doing that translation will become more difficult year by year due to how technology evolves; and most people in "leadership" suck at doing that translation well. That's what the article is talking about.

[1] https://en.wikipedia.org/wiki/Dunbar%27s_number

Nowhere in the book is it pitched as a tool you roll out to a bunch of MBAs and hope that good things happen - it's never about out-of-touch managers demanding results. If someone cascades objectives without also cascading planning and estimation, are they not simply a bad manager?

What the book DOES say is that you should not create such a fear of failure that no-one will set stretch goals, and that OKR results should not replace individual performance evaluations. In short, OKRs are what you seem to think they are, plus the culture that makes it possible. If an organisation doesn't get that right, are they doing OKRs?

If your organisation does not allow developers to push back against impossible objectives, do you not perhaps have bigger problems that OKRs (or any other system) can't fix for you? Why would you blame OKRs - would any other system not also be painful when a company has become so dysfunctional? Also, did you know that all objectives don't need to cascade - some can be set bottom-up?

You're going to call No True Scotsman fallacy, and I'm going to call Straw Man, so I'm done with this thread. Perhaps I just fell for John Doerr's elaborately constructed fantasy that he uses to sell books. But right now it seems like a useful tool to keep a company on track, and I'll keep adapting it as the company grows.

I'll add to this thread that I was largely against OKRs, having seen them implemented in the style of "we need a quantitative measurement for success so let's make something up hastily", only to fall prey to Goodhart's Law[1].

But John Doerr's book introduced me to the bridge of concepts that I was not seeing: in a healthy setup, we are first and foremost focused on a qualitative objective, and THEN we attempt to model that fuzzy feeling with a quantitative measure (the "key result") that should reflect success. It takes several iterations to come up with a matching measurement, and even then we need to constantly re-evaluate whether the measurement is appropriate or whether it is devolving into a numbers game devoid of the true objective.

In other words the full acronym is OAMBKY, or "Objectives, AS MEASURED BY Key Results". But that doesn't roll off the tongue quite as well.

So in that light, OKRs are a useful tool IF AND ONLY IF leadership—as well as the whole team—are focused on the philosophical, qualitative goal, and all are aware that the measurement is only an imperfect proxy that is constantly re-evaluated to help us better assess the goal; not a goal unto itself. But it takes real leadership to drive that message (as well as to avoid setting up misguided incentives).

[1] When a measure becomes a target, it ceases to be a good measure.

> If an organisation doesn't get that right, are they doing OKRs?

I mean, this is kind of what the article is saying; just along a different dimension.

> Perhaps I just fell for John Doerr's elaborately constructed fantasy that he uses to sell books.

A lottery winner can write a book about their excellent saving habits and how they helped a lot. If you follow their advice, you may do quite well, and gain some wealth!

But the advice can also be largely unrelated to why they are wealthy; and people hoping to be multi-millionaires as a result of those habits might be disappointed. This does not mean that the lottery-winner's advice about being frugal and saving money is inherently bad. Go ahead and save, by all means.

The summary is "your company may not yet be worthy of OKRs". This kind of has the whiff of other religious movements in tech like REST and Agile: if it's not working for you, you just aren't doing it right. Try harder and hope to one day be worthy.

I'm trying to grasp what's wrong with that. That statement is true for every set of system/values.

A set of systems won't work for 100% of the cases because that system was designed around a particular team. And there are times it won't work well.

Whether the system is good/effective has nothing to do with "you aren't doing it right" part.

The "you aren't doing it right" can be applied to literally everything.

The common thread is that they're frameworks which promise to be very general and be applicable everywhere, but which in practice "everyone does wrong".

If no one can actually successfully use your framework the right way, maybe the framework isn't as generally useful as it purports to be.

Ah got it.

Having gone through scrum and OKRs I agree with the idea that people tend to sell them to be 'generally' useful at least on the surface.

But once we started doing training/testing towards scrum, it's very clear we needed change as an organization. Some people ended up leaving as a result because it's not everyone's cup of tea.

I can't say the same for OKRs because I joined and it was already in place pretty effectively, but I imagine it to be similar.

My only experience with OKRs has been having to do more OKR-management homework separate from the actual work I do, which is the same 'coding stuff off the product roadmap supplied by a different department' as always.

That's why as a manager I always set my team's OKRs to be aligned with whatever crap the business is going to put in the scrum stories.

I've seen tech OKRs that are things like "reduce tech debt" or "decrease loading time" or "reduce downtime" but without hard tangible stories in sprints, it's just wishful thinking.

> That's why as a manager I always set my teams okrs to be aligned with whatever crap business is going to place in the scrum stories

How could it be any other way (and work)?

It doesn't, but it is still done. A friend of mine who worked at IBM told me that they had "quarterly goals" like "learn a new language" or "improve testing" or other utopian ideas, but their week-to-week sprint stories were completely orthogonal. So at the end of the quarter, during the performance review, the only people who had achieved some of their goals were the ones who had stayed to work evenings and weekends (extra time) on something related to their goals.

My CEO tried to explain to me that the "key result" part of OKRs didn't need to be measurable and then cited my colleague's as better examples despite none of them having written anything that could be used to hold them accountable. When I ask him if he's read High Output Management he just says "It's on my reading list". I'm leaving soon.

I've definitely seen it where objectives don't need to be measurable, but the results are what you measure to let you know if you're going in the right direction. Sometimes you might have objectives that look like results (like a top-line revenue target), but yeah... Good luck :)

We just started using them.

We're a small org that's mostly experienced consultants and contractors. They're energetic, proactive and independent people.

We've adopted OKRs to basically get everyone pulling in the same direction, but without being prescriptive about how the work gets done.

I'll report back at the end of the quarter on how it's working out, if anyone's interested :)

I am totally interested! But how will you report back?

I’ll reply here!

One of our senior leadership people read the book, got it in his mind that we need this at the company, got together the managers of all departments, they figured out some OKRs, and now everyone's off working on implementing them.

Some of them are rather impossible, like teaching all devs on how to do devops within 2 months, so they can be self sufficient.

"teaching all devs on how to do devops within 2 months, so they can be self sufficient"

There's so much wrong with this key result it's incredible :( One big reason that OKRs can fail is just flat out bad OKRs. Teaching all devs to do "devops" within two months ignores the actual issue that they seem to be trying to solve (which I'm guessing is that not having sufficient ops is hurting the company and velocity). It's far too easy to fail and puts pressure on individuals rather than letting the team find the right solution.

This has flicked a lightbulb on for me.

If your teams are fairly self-organising, if you are small-a agile and building your process as you discover more, you can plan in things like “dev A will spend 50% of this sprint learning dev-ops, without doing feature work or tech debt; and 10% shadowing dev B who already does Ops”.

That seems actionable, and measurable. Over two or three sprints/iterations/retrospectives, you’ll probably be able to measure the impact of the training (more successful releases, higher uptime, or more story points covered because ops is less a bottleneck etc).

"teaching all devs on how to do devops within 2 months"

The 'ing' at the end of the verb suggests that this is an activity, rather than an objective or a key result.

If your company is using the term "OKRs" to refer to things that aren't OKRs, then perhaps they could use some help from someone who has used OKRs successfully elsewhere.

The same thing could happen with any new way of working. Imagine a company where no one has used TDD before. Some leaders decide everything must use TDD from now on. People try to work in this new way but, lacking experience and role models, they write their tests at the wrong level of abstraction and end up constraining future refactoring instead of enabling it. Would such an experience mean that TDD is bad or useless?

Similar story at my place.

New executive hired who loves OKRs. Ends up not changing anything because the engineering OKR is “work on the product roadmap” aka what we’ve always been doing.

Only lasting impact is there are a few more slides in the company wide meetings.

Engineering is clearly run by malicious compliance geniuses.

> teaching all devs on how to do devops within 2 months, so they can be self sufficient.

At least this Key Result is measurable: 100% devs know how to do devops. :)

But who is the individual accountable for implementing this Key Result? For a task that broad, I hope a department manager is responsible for creating a training program, not individual devs responsible for teaching themselves.

> teaching all devs on how to do devops within 2 months

Estimates based on initial gut feel should be quadrupled and rounded to the nearest quarter... so 9 months. This should give roughly a 90th-percentile chance of success.

At my company, management has repeatedly said OKRs should target failure. So like if you think you can get a major feature set done in 15 weeks they want you to target 10.

I’m not a fan of it myself. The idea feels like growth hacking, but over many quarters it doesn’t feel great to always be falling short of goals. I always assumed this is part of the OKR model, but maybe it’s just my company.

By the next stage, you'll have products being delivered made up of 50 JIRA tickets with 250 bug tickets being moved to whatever your long term Maintenance and Support team is... but someone will get to play with themselves over the thought that it was delivered on time.

Before OKRs did you have any plans or goals or strategy?

How do OKRs relate to devops timelines?

OKRs seem a brilliant way to silo your teams and shut down any cross team collaboration.

OKRs become a shield to deflect any external request that doesn't precisely align with a measure.

Not a fan.

I'm certain you've got experience to back up this view of them, but you should consider that it's not a failure of OKRs, but of the management setting the OKRs, that your team's OKRs weren't aligned with other teams'. I've had similar experiences, but I've also had explicitly cross-team OKRs, and OKRs broadly set as "respond to the needs of customer teams X, Y, and Z" were among the most successful that I or others on my team picked up.

Collaboration should be a goal. A management structure that doesn't explicitly make it one is at fault, and they would have failed regardless of the organizational methodology they chose.

In my experience that still just pushes the problem off one layer. Now you've defined who the team can and can't interact with to an even greater degree than before, and it totally kills 'spontaneous' collaboration.

I am absolutely not rejecting the notion that maybe my company is very bad at doing OKRs, but for me they have always resulted in "working to the metric" and have totally inhibited team autonomy - you become ruled by the KRs.

EDIT: the first time my company tried OKRs a company culture of collaboration and experimentation was brought to a shuddering halt as everyone stopped trying to do what was best for the business and started trying to hit their OKRs instead.

So you believe collaboration is good, even when it comes at the cost of measurable results?

I feel the opposite. There is too much cross team collaboration that "feels good, doesn't do good". OKRs cuts through the BS, and makes you evaluate if your collaboration is actually doing anything measurable. Focus typically has this effect.

No, I believe OKRs kill collaboration - nothing more nothing less.

But furthermore I think your comment is suggestive of the "You can't manage what you can't measure" fallacy that underpins OKRs.

It’s tough to collaborate when nobody has a plan.

OKRs are just a tool, but having founded many startups, I know there comes a moment when you need a tool to align your team towards a clear goal.

I understand most of the comments, but as your startup grows there will come a time when OKRs become useful - when you have 70+ employees, let alone thousands. The bigger the company, the harder it is to manage; Google famously hated management and ended up with OKRs. My understanding from a couple of ex-Googlers is that most people are now just gaming the system and aiming at a 0.7 score, so I'm not sure they are still as useful as they once were at Google.

OKRs / KPIs etc. are not magic; at the end of the day the article is right, the team is what matters. How teams align and communicate with each other is what OKRs can help with (I find them better than the old top-down KPIs personally). I was once asked by a startup to explain OKRs, and the exercise kept prompting the CEO to ask questions about things that didn't make sense (e.g. some teams argued that goal X didn't apply to them). For me this was not an issue with the OKRs per se; rather, the OKRs showed very clearly that the organization was not well structured for its general goals. So worst case, if OKRs don't work, you should still be able to tweak things and measure the impact (that's the whole idea: measuring to improve).

Tell people what's important and what isn't. How innovative. OKRs are for managers who are incapable of telling their staff what's important.

Easy enough in a small org, much more difficult in a big org. Part of the point of OKRs, when done right, is that they show how the thing you're working on today ladders up to the big thing that drives the entire company forward. That said, in reality it's rarely executed well or with any precision whatsoever. I worked at a 1000-person place whose top-level OKRs were so vague they were garbage. You could have created any feature whatsoever and shoehorned it into those.

The downside of OKRs is that it incentivizes everyone to sandbag their efforts and then stick to that effort. Also no one is going to stick their neck out on goals which are ambiguous or lack clarity. There is just no incentive - in fact there's a massive dis-incentive.

If you have a startup with fewer than 300 people and you have good communication and goal setting, don't use OKRs. Use KPIs and then help your teams to keep iterating.

Input management is so much more effective than output management but it needs visionary leadership. Read: High Output Management - Andrew S Grove (ex Intel CEO).

Considering High Output Management literally advertises Management By Objectives, I'm confused how that is a citation for your point about KPIs vs OKRs, they should not look that different.

If you're doing OKRs without some type of indicator, kpi or otherwise, I don't see how you're achieving key results.

> The Role of Leadership
>
> And finally, and really the root of the problem, is that in the vast majority of companies I see that are struggling to get any value out of OKR's, the role of leadership is largely missing in action.

Hear, hear!

OKRs are being rolled out at work, and there's a pretty clear managerial vacuum. You could call it the KRO: leadership makes a request that trickles down to line staff for projects and goals, then each layer of management tries to glue them into a coherent Objective.

> The main idea is to give product teams real problems to solve, and then to give the teams the space to solve them.

First, there's a dearth of high quality engineers in the industry. They've been gobbled up by SV companies with large paychecks and promises of "fuck-you money" paydays. Everyone else gets developers in the range from above average to bad.

Product teams only work with high quality engineers across the entire product. Why? Because if they're shitty, there's no bound to the creep of shit code - call it "shit creep" (see footnote 1).

How do you fix a shitty product team, other than replacing the entire team, or product with hopefully a better team?

Sure, feature teams are more limited in scope and their abilities, but they at least constrain the shittiness to a feature. If the feature needs to be replaced, the team can just be replaced, not the entire product.

(1) We've all had to deliver shit code from time to time. The thing is the good engineers know it's shit and refactor it as soon as they can.

You will enjoy your career much more if you quit it with this inverse Lake Wobegon philosophy.

The idea that all engineers outside of a few Silicon Valley companies are mediocre is laughable if you've ever worked at a FAANG. People pick all sorts of jobs for all sorts of reasons.

I personally like OKRs, as I find them a valuable counterbalance to 1) daily standup, which can promote short term thinking, and 2) yearly "goals" in performance reviews, which, when evaluated in a binary way as completed/not completed, can deter people from taking on more ambitious and ambiguous tasks.

My big worry about OKRs is that they will descend into the same shitstorm that ruined SCRUM. Daily standups were never intended to turn into a daily opportunity for micromanagement and application of deadline pressure, but this is exactly what they became. And I can't say it was an external corruption of the process - the dangers are inherent to the technique. I always felt that saying "standup and scrum are good, you just have to be careful someone external to the process doesn't corrupt them into micromanagement" is like saying nitroglycerine is good, you just have to be careful some external chemical reaction doesn't cause it to suddenly and unpredictably combust. The instability is inherent to the substance.

There was also a mini (or not so mini) industry of agile consultants helping people "do agile", which elevated "processes and tools" over "individuals and interactions" - kind of like how the Seven Commandments in Animal Farm ended up amended to mean the opposite of how they were written down.

I see a danger in OKRs, in that they're a great way for a manager to turn a casual projection into a committed estimate. I see the inversion of "Customer collaboration over contract negotiation" all over again here. The developer (or employee) is lulled into a belief that we're all friends and can trust each other, while the "customer" (or client or boss) is lawyering up with a contract; when the time comes, the developer's feet will be held to the fire with firm commitments, and subjected to daily deadline pressure and micromanagement (because agile).

All I'm sayin' is, this could go sideways. But then again, so can any job.

I'm not sure OKRs should replace KPIs in most IT companies, since OKRs mostly aim at developers. I've experienced one company that asked its employees to do OKRs weekly and monthly. It actually helped a lot at the beginning because everyone was curious and energized.

But after a few weeks, complacency set in when someone forgot to submit their OKRs, and other members started doing the same thing. Pretty much a "broken windows effect". The workload was very high, so the OKRs weren't given much importance. Or maybe we did it the wrong way.

That doesn't mean we shouldn't align with OKRs, but a method for implementing them consistently and continuously should come first, before anything else takes place.

> OKR mostly aims at developers

Ops can totally have OKRs. Fewer incidents, faster resolution, faster deploy times, longer log retention, decreased AWS spend, the list of objectives I can think of off the top of my head is quite long.

Same for project managers: on time projects and reporting, better estimates, faster issue triage, more a/b experiments, more features launched.

OKRs done well are effectively a method of improving KPIs by recursively breaking work up into subtasks.
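A rough sketch of what I mean by "recursively breaking work up": an objective's score is the average of its key results, and a key result can itself be a sub-team's objective. (All names and numbers here are made up for illustration - this isn't any real OKR tool's model.)

```python
from dataclasses import dataclass, field
from typing import List, Union


@dataclass
class KeyResult:
    """A measurable target owned by one team or person."""
    name: str
    target: float
    current: float = 0.0

    def progress(self) -> float:
        # Cap at 1.0 so overshooting one KR can't hide a failing sibling.
        return min(self.current / self.target, 1.0) if self.target else 0.0


@dataclass
class Objective:
    """An objective whose children are KRs or sub-teams' objectives."""
    name: str
    children: List[Union[KeyResult, "Objective"]] = field(default_factory=list)

    def progress(self) -> float:
        # Roll up: score is the mean of all child scores, recursively.
        if not self.children:
            return 0.0
        return sum(c.progress() for c in self.children) / len(self.children)


company = Objective("Improve reliability", [
    KeyResult("Reduce p99 latency (ms shaved)", target=100, current=50),
    Objective("Fewer incidents", [
        KeyResult("Close out Sev-1 action items", target=4, current=4),
    ]),
])
print(round(company.progress(), 2))  # (0.5 + 1.0) / 2 -> 0.75
```

The point being: each KPI only improves if the leaves under it actually move, which is exactly where badly cascaded OKRs fall apart.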

> OKRs done well are effectively a method of improving KPIs by recursively breaking work up into subtasks

I strongly agree. (thumbs up)

Well, of course Ops can have OKRs as an HR evaluation method. KPIs' shortcoming is obvious: most of the results need a quantified indicator, which is not very appropriate for engineers' situations.

But KPIs can still be used as a complement to OKRs, providing clear numbers or dates as targets.

I just suspect that iterating on OKRs every week and month is quite challenging, and most of the OKRs stay unchanged even after months of iterations. HR and managers need to figure out how to keep everyone updating their OKRs effectively.

IMO, OKRs should be quarterly targets. Enough time to meaningfully make and measure changes repeatedly.

Good piece

I worked at a company that used OKRs, but the teams weren't set up so that people actually had responsibility for the things their OKRs asked of them, and it produced very little (thankfully the teams were generally good and it wasn't a disaster, but it was a waste of effort).

If the OKRs aren't embedded into the sales teams as well (which basically means you can't have separate sales teams) then it likely fails.


Quote: "OKR’s are first and foremost an empowerment technique."

Don't empower anyone. Instead, work on getting rid of / minimizing / improving anything that dis-empowers. This is usually easier, cheaper and far more straightforward. Not always easy to get rid of disempowerment, but you can at least put things in place to minimize it. Empowerment is barely measurable in real terms. All the metrics are indirect. Even turnover doesn't accurately measure it. Dis-empowerment, on the other hand, is a stinking mess. Follow your nose. Or your heart.

Quote: "Manager’s Objectives vs. Product Team Objectives"

One of these needs to be fired / removed / re-evaluated. If the manager is fighting the team, then who's more in alignment with the overall vision? Why would you tolerate a manager that is fighting their team? Or vice versa? If team members work across multiple leaders and there are issues, then there's some misalignment that needs to be fixed. Now. Iceberg! Change direction or sink!

Pointless drama and deliberate friction created in large organizations is why they squash innovation. It's why they do mindbogglingly stupid things. It's also why a manager or executive can benefit when a team fails, even their own. Incentives can be so screwed up that when a division fails, a number of people in the division cheer because their individual incentives are all green. This is too common.

Another quote: "The main idea is to give product teams real problems to solve"

You're doing WHAT?! How can you possibly tolerate non-real problems? Why would this be even necessary? That's like saying "it gives them a way to generate profit". Really? That's a thing you're going to add?! As if it's some new thing? What?!

Quote: "Passive manager"

What is this creature? How can you passively manage anything? Not even drunk people are passive. Only when they become unconscious do you start to have passivity. I've met managers whose teams zip along with the manager gone for a month. This isn't rare, and it's still not passive if adult supervision is in place. Why? Because an active manager puts processes in place to monitor and optimise what is going on. This is active management.

Quote: "Stop doing manager objectives and individual objectives"

Correct. Any mismatch means pointless friction. Why was this tolerated? Perhaps because drama at the bottom and middle keeps people too busy to notice the silliness at the top? Maybe. Regardless, it sounds expensive: How's that productivity going?

Quote: "Leaders need to step up"

Sure, and they should be allowed to lead. Which is often NOT the case. Plenty of managers leave team leaders in the dark about key aspects of what is going on and what is planned. Leading means choosing a direction. How can you know the direction to choose if you don't know the destination? This applies generally as well. Instead of expecting leaders to step up, why is there a step in the first place? Shouldn't there be a level playing field? Sort out organisational mess so it's as level as possible - remember the bit about removing disempowerment? I wasn't kidding. Here's a symptom: the need to step up when no step should exist.


Go for enabling and self-directed team members that can tolerate and operate successfully with adult supervision. This kind of supervision requires managers, directors and team leaders as well.

One last thing: Get rid of deadlines and use due-dates instead. Not just wordplay. We need to be using project X on date Y. Make it happen and don't drop dead doing so. Change the mindset to fit this approach. Those deadline crunches? Not good. Dropping dead after a due-date means something went very wrong. Everything must be re-examined to avoid it in future. The mess has to be cleaned up. Supports put in place. Additional followups scheduled and kept. This works for projects as well as childbirth.

Ok. I feel decaffeinated and that is dis-empowering. Time for coffee. See? Fixable.

Agree that the method is not guaranteed to be a success-maker. Our experience adopting in a small healthcare thinktank was difficult -- the pattern didn't really fit our present and future workflows very well.

> Please upgrade to a supported browser to get a reCAPTCHA challenge.

Why is this happening to me?

I see this at the bottom of the page, and I’ve been seeing it more and more. I’m using the Duck Duck Go browser on iOS.

An educated guess: reCAPTCHA has become (partly) passive, using whatever tracking data Google has about your behaviour. So, if your browser blocks the tracking, it causes issues. I assume that's what the DDG browser is about.

I really ought to read an OKR book, because the telephone-game version I hear about seems problematic.

For example, Austin's Measuring and Managing Performance in Organizations[0] gives a helpful 3-party model for understanding how simplistic measurement-by-numbers goes awry. He starts with a Principal-Agent and then adds a Customer as the 3rd party; the net effect is that as a Principal becomes more and more energetic in enforcing a numerical management scheme, the Customer is at first better served and then served much worse.

As a side effect he recreates or overlaps with the "Equal Compensation Principle" (described in Milgrom & Roberts' Economics, Organization and Management). Put briefly: give a rational agent more than one thing to do, and they will only do the most profitable thing for them to do. To avoid this problem you need perfectly equal compensation of their alternatives, but that's flawed too, because you rarely want an agent to divide their time exactly into equal shares.

Then there's the annoyance that most goals set are just made the hell up. Just yanked out from an unwilling fundament. Which means you're not planning, you're not objective, you're not creating comparative measurement. It's a lottery ticket with delusions of grandeur. In Wheeler & Chambers' Understanding Statistical Process Control, the authors emphasise that you cannot improve a process that you have not first measured and then stabilised. If you don't have a baseline, you can't measure changes. If it's not a stable process, you can't tell if changes are meaningful or just noise. As they put it, more pithily:

> This is why it is futile to try and set a goal on an unstable process -- one cannot know what it can do. Likewise it is futile to set a goal for a stable process -- it is already doing all that it can do! The setting of goals by managers is usually a way of passing the buck when they don't know how to change things.
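For the curious, the "is it stable?" check above is concrete in Wheeler's own framework: an XmR (individuals and moving-range) chart. A minimal sketch, with made-up data for a hypothetical metric:

```python
# Wheeler-style XmR control limits: a process is "stable" only if its
# points fall within limits derived from its own point-to-point variation.
# Data values below are invented for the example.

def xmr_limits(values):
    """Return (mean, lower_limit, upper_limit) for an individuals chart."""
    n = len(values)
    mean = sum(values) / n
    # Average moving range between consecutive points
    mr_bar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
    # 2.66 is the standard XmR scaling constant (3 / d2, d2 = 1.128 for n=2)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

weekly_signups = [42, 39, 45, 41, 38, 44, 40, 43]  # hypothetical metric
mean, lo, hi = xmr_limits(weekly_signups)
stable = all(lo <= v <= hi for v in weekly_signups)
print(f"mean={mean:.1f}, limits=({lo:.1f}, {hi:.1f}), stable={stable}")
```

Only once points sit inside the limits (stable) does comparing this quarter's number to a goal mean anything; before that, any movement is indistinguishable from noise.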

That last sentence summarises pretty much how I feel about my strawperson impressions of OKRs.

[0] https://www.amazon.com/Measuring-Managing-Performance-Organi...

[1] https://www.amazon.com/Economics-Organization-Management-Pau...

[2] https://www.amazon.com/Understanding-Statistical-Process-Con..., though I prefer Montgomery's Introduction to Statistical Quality Control as a much broader introduction with less of an old-man-yells-at-cloud vibe -- https://www.amazon.com/Introduction-Statistical-Quality-Cont...

> Put briefly: give a rational agent more than one thing to do, and they will only do the most profitable thing for them to do. To avoid this problem you need perfectly equal compensation of their alternatives, but that's flawed too, because you rarely want an agent to divide their time exactly into equal shares.

I would argue the system is working as intended. Contrary to your assertions, you don't want employees spreading effort like peanut butter, you want to focus them on executing one or two things quickly and getting value out of that quickly. Instead of launching 12 features a year from now, I'd rather launch 1 feature a month.

> you cannot improve a process that you have not first measured and then stablised.

There is of course, a certain amount of reasoning under uncertainty involved. One of the lessons many folks learn from a/b testing and OKRs is just how hard it is to actually make a difference, and folks need practice calibrating.

> Contrary to your assertions, you don't want employees spreading effort like peanut butter, you want to focus them on executing one or two things quickly and getting value out of that quickly.

That's not quite what I was driving at. Optimisation is made on the measurement. Measurement is only necessary because the Agent is not perfectly observable, there is an information asymmetry between Principal and Agent.

That's why Austin's model is so helpful. There are many things that must be done in order to best satisfy the Customer. Some of those are measurable, some are less measurable. But a rational Agent looks at any basket of measurements and will optimise for one of them: the one that pays best.

It's not enough to say "just this one feature and no peanut butter please". You have to define what the one feature is. You have to provide an exact measure for it. Agents can then either optimise honestly, or they can go further and optimise fraudulently. If honestly, the Principal realises that they actually need a basket of values to be optimised. But then they need to apply equal compensation, because the Agent will simply ignore any measurement that doesn't maximise their results.

I believe measurement is useful. But I also believe that connecting it to even the whiff of reward or punishment is beyond merely futile and well into being destructive.

> I really ought to read an OKR book, because the telephone-game version I hear about seems problematic.

I've read John Doerr's Measure What Matters OKR book and personally used OKRs for a few quarters. Google's re:Work site about OKRs is a short and adequate summary.


Objectives and Key Results for those who are confused.

BD needs to generate enough sales to keep the engineers stressed out. When you don't have that, you end up with politics. OR the company is pre PMF, and everyone needs to operate in that mode.

What is BD, OR and PMF?

Probably business development and product market fit. The "OR" I think was just an emphatic "or"

BD = Business Development, aka sales

PMF = Product Market Fit

Unsure on the OR.

I think that's just a regular old OR gate.

(BD -> Eng Stress) ^ pre PMF

I didn’t read the article. Is it because OKRs don’t work anywhere at all, and are only successful when they are ignored? At Google that’s certainly the case; the better a group is at paying lip service to OKR planning, the more actual work it gets done.

It's a 2-minute read; go see for yourself.
