Gokul Rajaram on why he doesn't celebrate feature launches
Gokul Rajaram has spent the last few decades leading product teams at world-class companies like Google, Facebook, and Square.

We sat down with Gokul, now on the executive team at DoorDash, for a frank discussion on building remarkable products, including:

  • The traits that the most successful product teams share
  • Why your feature launches shouldn’t be celebrated
  • The role of the product manager as an “editor”

Read on for the full discussion with Gokul.

What’s the right balance between customer usage data, customer interviews, and general intuition when you’re trying to build the right product and product roadmap?

As product builders, we need data on what customers do versus what they say they do. Good product teams do one or two customer interviews every week as a matter of course, where engineers, designers, product managers, marketers, and analysts are all on the call. Customer interviews are valuable in uncovering problems, but data is the best way to develop hypotheses around the right solutions for solving those problems. I think of it as breaking product development into two parts: One, what is the problem we’re solving, or the customer behavior we want to change? Two, what are all the possible solutions to that problem?

Those solutions manifest themselves in the form of features. For example, say your conversion rate from website landing to sign-up is low. There are customers who land on the website who aren’t signing up for a service. That’s a problem. And then there are five different ways that you can solve it. That’s where data can help—both from customer interviews and your product.

Ultimately, every feature is an experiment designed to test a hypothesis. And I think product development teams need to understand that. A feature is an experiment to test a hypothesis about a customer problem—that this solution will lead to a change in customer behavior. So it’s important to clearly articulate what the hypothesis is and run the experiment. This way, every feature is a two-way door. It shouldn’t be a one-way door.
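The hypothesis-testing framing here can be made concrete with the sign-up example from above. A minimal sketch, using an ordinary two-proportion z-test and entirely hypothetical numbers (the function name and figures are illustrative, not from the interview):

```python
import math

def signup_lift_z(control_success, control_n, treat_success, treat_n):
    """Two-proportion z-test: did the feature actually change sign-up behavior,
    or is the observed lift consistent with noise?"""
    p_pool = (control_success + treat_success) / (control_n + treat_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treat_n))
    return (treat_success / treat_n - control_success / control_n) / se

# Hypothetical numbers: 400/10,000 sign-ups without the feature, 520/10,000 with it.
z = signup_lift_z(400, 10_000, 520, 10_000)
print(round(z, 2))  # → 4.05; |z| above ~1.96 rejects "no behavior change" at the 5% level
```

A z-score near zero would falsify the hypothesis early, which is the whole point of treating the launch as a two-way door.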

To that end, once you put something out there, when can you be confident that something is actually working versus confirmation bias kicking in or acting based on organizational inertia? In other words, how do you think about actually walking back through that two-way door—and creating a culture that encourages doing so?

I think one of the biggest traps that companies fall into is becoming “feature factories,” where the goal of the company is to launch a feature and declare success when the feature is rolled out. When you ask many product teams what they accomplished, they will rattle off a list of features without truly explaining why those features mattered. What customer behavior did they change? What impact did they have?

The key is to make sure you clearly articulate the hypothesis behind the feature you’re launching and try to falsify that hypothesis as early as possible. As long as the hypothesis is still valid, you keep iterating. Because customers want and expect us to commit to the products we’re selling them on. They’re paying us in many cases. They’re relying on us. If we aren’t committed, why should they be?

At Square, our mantra was “launch is not success.” What this means is that every launch needs to have a clear hypothesis. And the hypothesis has to be measured in terms of customer behavior change—not in terms of revenue and so on. For a large enterprise customer, signing up is a customer behavior change. They weren’t a customer before, and because of this feature, they became a customer. Whatever the case may be, there has to be behavior change articulated and met before you can say a launch was a success.

How do you view the role of a product analyst in product development teams?

Many companies use analysts basically as BI teams that build dashboards. But the real value and power of product analysts is to deliver that next level of insights—an understanding of which customer behavior changes actually drive business metrics. So that’s where your analytics team comes in. They’re constantly asking, “What are the inputs that ultimately drive business outcomes?”

Analysts essentially run experiments outside of the normal product development process to help us understand what bets to make and inform product teams with this data.

When you’re making these bets, how does your approach differ if it’s an incremental effort versus a brand new product direction where you may not have as much data and are relying on intuition or your understanding of the market?

Ultimately, everything is rooted in customer problems and customer behavior changes in the early stages of a company. I think of leapfrog efforts as those that can move customer behavior by a large amount versus a smaller amount. You should always try to go for big things until your company is at a massive scale. In order to do that confidently, you need to understand which customer behaviors have the biggest impact on your output metrics.

There are two phases to this. First, you’ve got to understand which customer behaviors to focus on. At the top of the hierarchy are business metrics—the things that your CFO cares about, like revenue and profit. Below those are the customer behaviors that drive those metrics: the number of new customers we acquire, the number of existing customers who come back, and maybe things like usage or retention. And then perhaps there’s a subset of customer behaviors that are huge drivers of your key output metrics.

Once you understand which behaviors to focus on, you can think about taking big swings. A lot of product teams nibble around the edges and make cautious bets—a 1% change here, or 2% change there. Young, growing companies should be striving for 20%, 30%, or even 50% change in customer behavior.

For companies that don’t have infinite resources (i.e. most), how do you set a boundary or a risk threshold for those 50% changes? How do you define that risk appetite?

I think the key is to set a check metric along with your north star metric—a guardrail. For example, for a marketplace product, you’ll want to make sure that a change in pricing that increases demand doesn’t cause profitability to decrease. Solving for a single metric in isolation can be very risky and disruptive. I actually wrote a whole Medium post on this topic.

Basically, when you’re solving for a behavior with a check metric in place, it eliminates the risk of doing a bunch of things that might negatively affect your actual north star metric.
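The north-star-plus-guardrail pattern amounts to a simple launch gate. A hedged sketch using the pricing example (the function name, thresholds, and numbers are all hypothetical):

```python
def ship_decision(north_star_lift, guardrail_delta, min_lift=0.0, max_guardrail_drop=-0.02):
    """Hypothetical launch gate: the north star metric (e.g. demand) must improve,
    while the check metric (e.g. profitability) stays within a tolerated drop."""
    return north_star_lift > min_lift and guardrail_delta >= max_guardrail_drop

# A pricing change that lifts demand 8% but drops profit 5% fails the gate...
print(ship_decision(north_star_lift=0.08, guardrail_delta=-0.05))  # → False
# ...while the same lift with only a 1% profit dip passes.
print(ship_decision(north_star_lift=0.08, guardrail_delta=-0.01))  # → True
```

The design point is that the check metric is evaluated on every experiment, not just when someone remembers to look.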

What kind of behaviors have you seen amongst product teams that continue to ship really successful features? What are the behaviors of those product teams, and how do they improve over time as they build new products?

There are a few common behaviors I’ve seen. The first relates to product vision and strategy. And by product vision, I mean the team’s understanding of the customer behavior that they’re trying to move and how it translates back to the business metrics. Product, design, and engineering all need to have that shared sense of mission and purpose. I think that unless a product team truly understands the customer behavior they’re trying to move and why it’s important, they won’t feel empowered.

Second is the role of the product manager in not only scoping and shaping the problem, but also as an editor. You can come up with a huge list of ideas for any problem that’s presented, but it’s up to the product manager to stay focused on the small number of things that truly matter and to cut things out that don’t. It’s important to have an iterative loop with engineering and design—often the people who have the most context—and to let them go really deep to actually solve the problem.

Third, it’s vital that the team is autonomous and durable. A lot of companies try to create these task-based squads that are there for a quarter and then they disassemble the squad and move on to something else. But that doesn’t allow teams to master their understanding of customer behaviors, the possible solutions to the underlying problems, and how to stack-rank them when the time comes. You can’t innovate, and you also don’t get to know each other. I don’t really think you can move a product team like pieces on a chessboard.

Chemistry and skillset diversity make up the fourth piece. Your team needs to know about each other and respect each other enough that everyone feels comfortable contributing, raising suggestions, and challenging conventional wisdom so you don’t have groupthink. It shouldn’t be that the product manager says, “Here’s what we are going to build,” and the engineers and designers just go off and build it. That’s not the way effective product teams work. It has to be a respectful back and forth.

And finally, the last piece is data and analytics. When it’s finally time to launch a feature, think of it as a potential solution or an experiment to test your hypothesis. The feature launches are not in themselves something to be celebrated. Each launch should be framed with an experiment template: “Here’s the hypothesis we have. Here’s the experiment we’re going to run—it’s called ‘the feature.’ And here’s the population that we’re going to expose this experiment to.”
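That experiment template could be captured as a small record that every launch must fill in before it ships. A minimal sketch, with all field values invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ExperimentBrief:
    """Hypothetical template for framing a feature launch as an experiment."""
    hypothesis: str      # the customer behavior change we expect
    feature: str         # the experiment itself
    population: str      # who is exposed to it
    success_metric: str  # how the behavior change will be measured

brief = ExperimentBrief(
    hypothesis="A shorter sign-up form will raise landing-to-sign-up conversion",
    feature="two-field sign-up form",
    population="50% of new web visitors",
    success_metric="sign-up conversion rate",
)
```

Forcing the brief to exist before launch makes “launch is not success” operational: success is defined by the metric in the brief, not by the rollout.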

The role of the product manager, to some degree, is also to get adoption for this feature and make sure that people are actually exposed to and using it in the right way. How (and when) should product teams think about marketing the feature they’re releasing?

I would actually add marketers to the product development team, because marketing is a critical part of figuring out how a product or feature could be discovered. Our marketing teams will always say that marketing’s job is to amplify what’s happening in the product. If the product team doesn’t do its job well, marketing is not going to be able to do something to change the fundamental nature of the product. But if marketing is deeply involved, they can figure out clever ways the product experience itself can be marketed.

For example, we had a product at Square called Cash App that lets you send money to people. They spent $0 on marketing. Why? Because the P2P nature of the app is the marketing strategy. And so the marketing teams were essentially embedded in the product team, figuring out how to make the product functionality itself do the marketing. Similarly, at Facebook there’s a team called Product Growth, which is basically a growth marketing team, but they don’t have any dollars. They’re responsible for figuring out how to reduce friction in the product experience and make it more delightful so people want to tell others about it.

To take it a step further, the performance reviews of everyone working on a certain customer behavior should be identical. Rather than just looking at whether someone is a good product “manager” or a good product “marketer,” there should be a fuller consideration of whether the product they worked on solved the customer problem, or not.

So TL;DR, I think it’s a very important thing for marketing to feel part of the product development process. They should be trying to figure out how to make the product itself become the main marketing vehicle versus being this external thing that they’re putting around a product.

Conway’s law suggests that companies will essentially ship their org chart to customers and go build software in a way that matches their org structure. How do you think about structuring organizations to deliver the best customer value?

The best way to solve for this is to have teams focused on customer behaviors and customer outcomes, so really end-to-end, with the ability to work across functions to make it happen.

The data team should own the underlying app or web platform, but the product teams need to own customer outcomes. So for example, you should have a product team that owns new user acquisition. And to meet their outcomes, they’ll collaborate with marketing, engineering, design, analytics, and even sales. That’s at the highest level, and then there should be a product team that owns customer engagement and looks at whether customers are using the product daily, for example. Then there are probably sub-customer behaviors that need owners, too.

I think it comes down to breaking down your business metrics into various inputs that drive the customer behaviors that matter most to you. Giving ownership to the teams to figure out what those inputs are can be very powerful.
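This kind of breakdown is essentially a driver tree: a business metric expressed as a function of the customer-behavior inputs that teams can own end to end. A toy sketch (the revenue model and every number are hypothetical) that bumps each input by 10% to see which behavior moves the output metric the most:

```python
def revenue(new_customers, existing_customers, retention, orders_per_active, avg_order_value):
    """Toy driver tree: active customers come from acquisition plus retained
    existing customers; revenue is their usage times order value."""
    active = new_customers + existing_customers * retention
    return active * orders_per_active * avg_order_value

baseline = dict(new_customers=5_000, existing_customers=20_000,
                retention=0.60, orders_per_active=2, avg_order_value=30.0)
base = revenue(**baseline)

# Sensitivity check: a 10% bump in each owned input, holding the rest fixed.
for name in ("new_customers", "retention", "orders_per_active"):
    bumped = {**baseline, name: baseline[name] * 1.10}
    print(name, round(revenue(**bumped) / base - 1, 3))
```

In this toy model a 10% retention gain moves revenue more than a 10% acquisition gain, which is exactly the kind of insight that tells you which behavior deserves a dedicated owning team.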

You’ve been part of really impactful product organizations like Google, Facebook, Square, and DoorDash. What’s a core learning you’ve gained from each of those experiences that’s really stuck with you?

One thing I’ve observed at each of these companies is that I think it’s important to have a timeless mission that can be extended over decades. People always harken back to the mission. And I think it’s important for the leadership to live the values behind it. People want to join a mission-driven company.

The second thing I’ve taken away is there’s no singular path to success for a company. Each company’s operations are very different. They think completely differently from each other, but their structures, processes, reward systems, and values all center around that one thing they care deeply about.

I think it’s hard for companies to try to be a design-focused company and a tech-focused company and a growth-focused company and an operations-focused company. You need to understand what it takes to win in your space and double down on that. For example, AWS probably has a distinctly different culture than Amazon, because that’s what’s needed. At the end of the day, greatness has no formula—but the best companies are consistent. They know what they’re about at their core, their people know what to expect, and their leaders live and breathe that.

About Gokul Rajaram

Gokul Rajaram has served on the executive team at DoorDash since November 2019. Prior to DoorDash, he worked as Product Engineering Lead from 2013 to 2019 at Square, where he led several product development teams and was a member of Square’s executive team. Gokul also formerly served as Product Director of Ads at Facebook and as a Product Management Director for Google AdSense.