Some Notes on Executive Dashboards

Command & Control & Confusion

Why are executive dashboards so bad?

In my consulting work, almost every company has lackluster reporting and dashboards. These days it’s less that reporting is completely missing (though that still happens) and more that the things the executive team looks at regularly offer no real insight into the business.

Most of my consulting work revolves around putting together some kind of strategic plan. It usually boils down to a kind of basic equation, something like:

“If you invest $$ into activities X, Y and Z then over 2 years we can achieve $$$”

You secure buy-in from the key players, grab the money and get to work.

Oddly though - companies only tend to measure the right hand side of the equation.

Company dashboards are designed around metrics and measurement of results - they’re trying to measure what has happened.

Measuring what happened is important, obviously. But it’s a bit like driving a car only looking in the rear view mirror…

It’s also important, however, to measure what is happening.

Unfortunately, in my consulting work, most companies don’t have any kind of measurement in place for the left hand side of the equation.

📈📈📈

Maybe there’s a blind spot in my consulting. When you put a plan together that says “If you invest $$ into activities X, Y and Z then over 2 years we can achieve $$$” - then there’s some kind of assumption, either explicit or implicit, that activities X, Y and Z will produce results.

It’s kind of obvious that you have to find evidence for this historically - I like to show how investing in these activities has paid off previously, or how a similar situation worked out for a similar business.

But perhaps I could better articulate how this future investment will play out. Not just a business model showing X, Y and Z with revenue potential, but actually showing how you would measure progress on each initiative. It always feels implied to me that when you invest in a plan you need to measure progress, but I think I could be more proactive in bundling the measurement plan with the pitch… Hmm.
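To sketch what that bundling could look like (every initiative name, dollar figure and metric below is hypothetical, purely for illustration), imagine the pitch itself carrying a measurement plan per activity:

    # A strategic plan where every funded activity carries its own measurement
    # plan. All names and numbers below are hypothetical, for illustration only.
    plan = {
        "goal": "Invest $1.3M into X and Y to reach +$3M revenue in 2 years",
        "initiatives": [
            {
                "activity": "X: launch a content hub",
                "investment_usd": 400_000,
                "input_metrics": ["articles published / week", "avg quality score"],
                "output_metrics": ["organic signups / month"],
            },
            {
                "activity": "Y: build an outbound sales team",
                "investment_usd": 900_000,
                "input_metrics": ["demo calls booked / week"],
                "output_metrics": ["new ARR / quarter"],
            },
        ],
    }

    # Reviewing the pitch then includes a simple completeness check:
    # no funded activity without a way to measure progress on it.
    for item in plan["initiatives"]:
        assert item["input_metrics"], f"{item['activity']} has no input metrics"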

📈📈📈

The book Working Backwards explores this idea - Amazon calls them “input metrics” or “controllable input metrics”.

Input metrics are like measuring the left hand side of the equation! You’re measuring the things that supposedly drive the revenue. Today’s revenue is not a good measure of tomorrow’s revenue - input metrics are better.

Interestingly, it’s quite hard to find the right input metrics - it’s not always obvious which ones actually influence future revenue.

I think the basic working model for people is that metrics measure the business, when in fact input metrics help you learn about the business. By iterating and refining your input metrics you actually become a stronger operator - you learn more specifically which levers actually get results. Dive into the full post from Cedric Chin here for a bit more.
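As a toy illustration of that refinement loop (the weekly numbers and metric names here are made up), you can check which candidate input metrics actually lead the output by correlating each input series against revenue a few weeks later:

    from statistics import mean, stdev

    # Hypothetical weekly data - metric names and numbers are invented
    # purely for illustration.
    weeks = {
        "demo_calls_booked":  [12, 15, 14, 18, 21, 19, 25, 28],   # input metric
        "articles_published": [4, 4, 5, 3, 4, 5, 4, 4],           # input metric
        "revenue_k":          [80, 82, 85, 84, 90, 96, 94, 103],  # output metric
    }

    def pearson(xs, ys):
        """Plain Pearson correlation between two equal-length series."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
        return cov / (stdev(xs) * stdev(ys))

    def lagged_correlation(inputs, output, lag):
        """Correlate each input metric at week t with the output at week t+lag."""
        shifted = output[lag:]
        return {
            name: round(pearson(series[: len(shifted)], shifted), 2)
            for name, series in inputs.items()
        }

    inputs = {k: v for k, v in weeks.items() if k != "revenue_k"}
    print(lagged_correlation(inputs, weeks["revenue_k"], lag=2))
    # A rough signal for which levers deserve a place on the dashboard -
    # not proof of causation, just a starting point for iteration.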

📈📈📈

But there’s something deeper here. Over the last few years I’ve basically only worked with the C-suite of organizations. Supposedly the “people in charge”. But time and time again my point of contact is frustrated at the state of reporting internally, while also not doing anything about it. So why not fix it?

Output metrics feel neutral. They’re observations about what happened - so it’s hard to argue with them.

Input metrics, on the other hand, are more opinionated - as we just saw, they’re not perfect measures or predictors of future revenue, and in fact you might iterate and refine them over time. You might disagree with them!

This brings a power dynamic into play that I find interesting. Senior executives - CEOs, even founders - feel unable (or unwilling) to impose new dashboards and metrics on the business. Everyone is scared of micro-managing. Perhaps senior executives also don’t feel confident enough in the mechanics of the actual work to oversee the creation of input metrics?

📈📈📈

Dashboards are a battleground for power in other ways too.

I often see teams frustrated that the way they’re measured doesn’t accurately reflect the effort / nuance / expertise / care that they feel is necessary for their work to succeed.

But I rarely see teams advocating to change the measure!

This is the metrics mindset that only measures concrete outputs. But you have the freedom to make your own measures. I recall working with a content publishing business (think someone like Wirecutter) where we were trying to nail down some measure of “quality content”. Not a simple problem - and certainly one that’s hard to find an objective metric for. But eventually we got a few senior people around the table, created a simple 5-point scale across a few key areas, and asked everyone to rate content subjectively against it.

If I remember correctly it was questions like:

  • “Is the summary of the page clear within 30 seconds?”
  • “Can you immediately tell that it’s written by experts?”
  • “Have we demonstrated that we did hands-on testing of the products?”

Everyone scores the content, you average the scores, and you get a blended “quality score” for each piece. This creates a metric that captures some of the intangible “content quality” ideas the team felt were important but weren’t reflected in the existing dashboards.
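A minimal sketch of that blending (the rater names, question keys and scores are invented for illustration):

    from statistics import mean

    # Hypothetical ratings: each rater scores each question on a 1-5 scale.
    ratings = {
        "alice": {"clear_summary": 4, "expert_voice": 5, "hands_on_testing": 3},
        "bob":   {"clear_summary": 3, "expert_voice": 4, "hands_on_testing": 4},
        "carol": {"clear_summary": 5, "expert_voice": 4, "hands_on_testing": 2},
    }

    def quality_score(ratings):
        """Blend subjective 1-5 ratings into a single content quality score."""
        per_rater = [mean(scores.values()) for scores in ratings.values()]
        return round(mean(per_rater), 2)

    def question_breakdown(ratings):
        """Average each question across raters - useful for spotting weak areas."""
        questions = next(iter(ratings.values())).keys()
        return {q: round(mean(r[q] for r in ratings.values()), 2) for q in questions}

    print(quality_score(ratings))       # blended score for one page
    print(question_breakdown(ratings))  # per-question averages across raters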

Once we got this quality score added to the dashboards it wasn’t long before the CEO was demanding that we increase the average content quality of our pages.

You manage what you measure. So think carefully about how to measure what you want to be managed by.

📈📈📈

I’m very interested in what Doubleloop is building: basically a kind of strategy canvas where you can plug various input metrics into output metrics and measure them with live data.

I like this idea - that we should be questioning and exploring the dependencies of our strategic plan directly! We make a bunch of assumptions in the initial strategy pitch and then… never go back and check whether we were right? Seems kind of nuts.

📈📈📈

I’m also interested in what Variance is building. Starting with the opinionated thesis that Product-Led Growth should enable prospects to engage, sign up, set up billing and then actually use the product, Variance is building a reporting product that lets you see prospects, account by account, as they move through various “milestones” of user action.


I’m interested in this because I think it’s smart and useful - but also because it’s building software around an embedded thesis or ideology. IF product-led growth, THEN here’s the CRM product for you… I think we’ll see more of this kind of opinionated software emerging in B2B.

📈📈📈

My brother is raving about the book Four Disciplines of Execution (summary). The book has a notion similar to Amazon’s around leading vs lagging indicators. But it also has this idea of scoreboards.

Mmm. I like this notion of “Player scoreboards” vs “Coach Scoreboards”. It reminds me that every single dashboard is, implicitly, an exercise in incentive design. By choosing what goes into the dashboard you’re emphasizing what’s important, and what’s not.

The medium is the message, you manage what you measure etc etc.

📈📈📈

Talking of the “medium of dashboards” - I’ve been spending a lot of time in Google Data Studio recently and I really appreciate the idea of a blank canvas to design layout and reporting on top of. It implicitly encourages layout as a primary activity.

Yes, it’s technically possible to do all kinds of fancy design in a spreadsheet too - but any design or styling work you do is more fragile in a spreadsheet and, mostly, people don’t bother.

I mean - like any design tool, I see plenty of Google Data Studio reports that make my eyes bleed! But on balance I like the notion of starting with a blank canvas. A Data Studio report feels very different to a spreadsheet report. It forces me to make clearer tradeoffs between visual hierarchy, position and relation.

📈📈📈

Another nice thing about Google Data Studio: it allows you to separate access to the dashboard from access to the underlying data source, so you can safely circulate a report to various stakeholders. This is handy because a dashboard is only as powerful as it is shared.

The more teams use and rely on dashboards, the more they become cemented as powerful objects.

I’ve written before about the age of permeable organizations: the idea that organizations increasingly have a series of orbital stakeholders with a blurring of the boundaries between “inside” and “outside” the organization.

Maybe DAOs are relevant here?

If quarterly reports are the traditional way of exposing data outside the organization - a realtime dashboard is the web3 way of doing it? DAOs are (optimistically) the modern business structure designed for orbital stakeholders.

And maybe (maybe!) a DAO that tokenizes input will naturally have a leg up - their dashboard by default will show input metrics and output metrics…

📈📈📈

But maybe dashboards don’t even need metrics or numbers on them at all! As we move towards an oral and visual culture of video, memes and social media, maybe dashboards need richer context?

This tracks nicely with what I see inside organizations. Too often user research is a one-time activity, buried deep inside the product or marketing org. It’s not a strategic activity. What would it look like to structure user research at a strategic, executive-dashboard level? Maybe something like Amazon’s “voice of the customer”.

📈📈📈

So, I know that every client I work with needs help setting up better dashboards. But I also know that a dashboard is a powerful object, and changing it requires bravery and nuance. To recap these ideas, here are some ways to interrogate your own reporting setup and see what you might change:

1. Qualitative vs Quantitative

Is your dashboard raw data or is there some post-processing? Are you using expertise to create gradings or analysis on top of the data? Is there a voice of the customer segment for your reporting?

2. Input vs Output

Are you reporting only on what has already happened or are you showing what’s happening now? Obviously you need both, but in my experience companies rely too heavily on output metrics.

3. Flexible vs Fixed

How often do you update the metrics you’re reporting on? Are you explicitly designing your metrics to be updated? What’s your feedback loop for checking that inputs actually lead to the right outputs?

4. Open vs Closed

Who gets to see the dashboard? How do you ensure that everyone is on the same page? In a world of orbital stakeholders, are you capturing the potential value of opening the dashboard up to a wider set of people?

📈📈📈

Maybe dashboards and strategic plans need to be more closely entwined?

