Note: this article is the last part of a series called Accelerated Velocity. This part can be read stand-alone, but I recommend that you read the earlier parts so as to have the overall context.
So far in this series I’ve shared thoughts on how to do things right – how to leverage best practices and develop skilled practitioners to get excellent results. Doing things right, however, doesn’t mean you’re doing the right things; it could mean you’re just doing the wrong things much faster.
The hard truth is that doing things right is easier than doing the right things. The path to the former takes hard work but is relatively clear and straightforward. The path to doing the right things is considerably more opaque and mysterious. Just compare the number of books and blogs describing how to build software vs. what to build with software to see the impressive gap between the two.
I’ve spent most of my career working to do both. My primary responsibilities as an engineering leader have been to ensure the team is working effectively and efficiently. But in my various executive and consulting roles I’ve had both the opportunity and obligation to be a thought leader in the areas of business, product and platform strategy. Through these roles I’ve developed a deep respect for the challenges and upsides of choosing the right path. I’ve also learned that an engineering leader who isn’t concerned with the question of “are we working on the right things” is doing their team a huge disservice.
There isn’t a formula or cookbook I’ve discovered that guarantees success, but I’ve found there are several ingredients which radically improve your chances of doing the right things as an engineering team. We do all of these – some better than others – at Bonial.
Data / Situational Awareness
You can’t make good decisions about where to invest if you don’t know what’s going on with your systems or your users. In a previous article I discussed at length why this is critical and how Bonial developed situational awareness around system performance and stability.
It’s just as important to know your users. Note that I didn’t say, “know what your users are doing”. That’s easy and only tells part of the story. What you really want to know is “why” they are doing what they’re doing and, if possible, what they “want” to do in the future. That’s tough and requires a multi-faceted approach.
For this you’ll want both objective and subjective data to create a complete picture. Objective data will come from event tracking and visualization (e.g. Google Analytics or home-grown data platforms like the Kraken at Bonial). Subjective data will come from usability studies, user interviews, app reviews, etc. Combined, this data and intelligence should enable you to paint a pretty good picture of the user.
As with most things this too has its limits. Data is inherently backward-looking. It will tell you what users have done and what they have liked, but extrapolating that into the future is a tricky exercise. Even talking to users about the future doesn’t help much since they are notoriously bad at predicting how their perspective will change when faced with new paradigms.
So treat your data as guidance and not gospel, and constantly update the guidance. Run experiments based on hypotheses derived from the historical data and challenge them with new data. If the experiment is sound and validates the hypothesis you can move forward with relative confidence.
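To make "validates the hypothesis" concrete, one common approach is a two-proportion z-test on an A/B experiment. The sketch below is illustrative only – the function name, conversion numbers and sample sizes are hypothetical, and a real analysis would also consider statistical power and practical significance:

```python
from math import sqrt, erf

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a two-proportion z-test.

    conv_a/conv_b are conversion counts, n_a/n_b the sample sizes
    for variants A and B. All figures below are made up.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical experiment: variant B converts at 5.5% vs. 5.0% for A.
p = ab_test_p_value(conv_a=500, n_a=10_000, conv_b=550, n_b=10_000)
```

With these (made-up) numbers the p-value comes out above the conventional 0.05 threshold, so despite the seemingly better conversion rate the experiment would not yet justify moving forward with confidence.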
When in doubt, trust the data.
Return on Investment

Building things for fun is everyone’s dream, and many teams succumb to this temptation. Some succeed; most fail. Considering return on investment (ROI) can help avoid this trap. Teams that are ROI-focused ask themselves how an R&D investment will be paid back and, ideally, later verify that the payback was realized. The desired result is a focus on the things that have the potential to matter most.
Does it work? Maybe – there are pitfalls. Modeling ROI is not easy, and the models themselves can be overly simple or (too often) complete crap. The inverse is also true – people can spend so much time on the modeling that any benefit to velocity is lost. It takes practice and time to find the right balance.
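A model on the "deliberately simple" end of that spectrum is a plain payback-period calculation: how many months until cumulative net benefit covers the upfront build cost. Everything here is a hypothetical placeholder – no discounting, no ramp-up, constant benefit – but even this crude version forces the ROI conversation:

```python
def payback_months(upfront_cost: float, monthly_benefit: float,
                   monthly_cost: float = 0.0) -> float:
    """Months until cumulative net benefit covers the upfront investment.

    Deliberately naive: ignores discounting, ramp-up and uncertainty.
    """
    net = monthly_benefit - monthly_cost
    if net <= 0:
        return float("inf")  # the investment never pays back
    return upfront_cost / net

# Hypothetical feature: ~3 engineer-months (say EUR 60k) to build,
# expected to add EUR 10k/month in revenue with EUR 1k/month upkeep.
months = payback_months(upfront_cost=60_000, monthly_benefit=10_000,
                        monthly_cost=1_000)
```

Comparing such payback estimates across candidate features is usually more useful than any single number in isolation – the point is the ranking, not the decimals.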
Some of the toughest ROI choices involve comparing features against non-functional requirements (NFRs) like stability, performance and technical debt. An easy solution is to not beat your head against this “apples to oranges” problem; instead, give each team a fixed “time budget” for managing technical debt and investing in the architecture runway. This will create some push-back in the short term (especially among product owners who want more capacity for features), but in the long term everyone will appreciate the increased velocity you’ll realize from making regular investments. At Bonial we ask teams to allocate roughly 40% of their capacity to rapid response, technical debt reduction and architecture runway development. That may seem like a lot, but if it makes the other 60% 7x faster, everyone wins.
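The arithmetic behind that claim is worth making explicit. Relative to a hypothetical team spending 100% of its capacity on features at baseline speed, a team that invests 40% but runs the remaining 60% seven times faster still ships several times more:

```python
def effective_throughput(invest_share: float, speedup: float) -> float:
    """Net feature throughput relative to a team spending 100% of its
    capacity on features at baseline speed 1.0.

    invest_share: fraction of capacity spent on tech debt / runway.
    speedup: how much faster the remaining feature work runs.
    """
    return (1.0 - invest_share) * speedup

# The split from the article: 40% on rapid response, tech debt and
# architecture runway; if that makes the other 60% run 7x faster,
# net output is (1 - 0.4) * 7 = 4.2x the baseline team's.
net = effective_throughput(invest_share=0.4, speedup=7.0)
```

The break-even point is also easy to read off: the investment pays for itself as soon as the speedup exceeds 1 / (1 - invest_share), i.e. anything above ~1.67x for a 40% budget.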
In the end, treat ROI as a guideline. I think you’ll find that the simple act of asking people to think in these terms will elevate the conversations and make some tough decisions easier.
Business Knowledge

The more people who know your business, the better. Your engineers, testers, data scientists, operations specialists and designers each make dozens or hundreds of decisions a day, small and large, that affect the business. Most of these decisions require them to extrapolate details from general guidance. If they don’t understand the business – or, more specifically, the “why” behind the guidance – then there’s a good chance they’ll miss the mark on the details.
So take the time to explain the “why” of decisions. Educate your people on business fundamentals. Share numbers. Answer their questions. And, most importantly, be honest even when there’s bad news to share. It’s better that they are armed with difficult facts than confused by half-truths and spin. You’ll be surprised at how many people respond positively to the respect you show them by being honest.
Engaged Engineering

Some companies work under a model in which engineering is expected to meekly follow orders from whoever is driving the product strategy. This is foolish to the point of being reckless. Some of the smartest people and most analytical thinkers in your company are in the R&D organization. Why cut that collective IQ out of the equation?
Smart companies involve the engineering teams in ideation as well as implementation. The best companies go one step further – they give engineering implicit control over what they build. Product managers and other stakeholders have to convince engineering of their ideas; there is no dictatorial power.
Some may fear that this leads to a situation where the product authority becomes powerless or marginalized. While I’ve seen a number of product teams that were largely side-lined, it was never because they weren’t given enough authority – it was because they hadn’t established themselves as relevant. Good product managers win over the engineers and stakeholders with demonstrated competence.
At Bonial, the product team has responsibility for prioritizing the backlog, but the engineering team has responsibility for committing to and delivering the work. This split gives a subtle but implicit veto to the engineering team. Most of the time the teams are in sync, but at times the engineers call “bullshit” and refuse to accept work – usually due to an unclear ROI or a clear conflict with stated goals. This creates some short-term tension, but over the long term it leads to healthy relationships between capable product managers and engaged engineering teams.
People who Think Right
My mentor used to say that “some people think right, and some don’t.” What he meant was that some people have a knack for juggling ambiguity; when faced with a number of possible choices, they are more likely than not to pick one of the better ones. People who “think right” thrive in a leader-leader environment; people who don’t are dangerous.
Why? Because after all the data has been collected, all of the models have been built and all of the (unbiased) input has been collected, decisions still need to be made. More often than not there will be several options on the table. Certainty will be elusive. In the end there’s an individual making a choice using all of the analytic, intuitive, conscious and sub-conscious tools available to them. Make consistently right decisions and you have a fair shot at success. Make consistently wrong decisions and you’ll likely fail.
Some people are far better at making the right decisions. These are the people you want in key roles.
The trick is how best to screen for these people. At Bonial we use open-ended case studies and other “demonstrations of thought and work” during the recruiting process to get a glimpse of how people think. We’ve found this to be very effective at screening out clear mismatches, but a short, artificial session can only go so far. After that it’s a matter of observation during trial periods and, eventually, selection for fitness through promotions.
Summary

“Doing the right things” is an expansive topic, and this article just scratches the surface; I could probably write a book on it alone. Once you have the basics of SDLC execution in place – good people, agile processes, devops, architectural runway, etc. – the main lever you have to drive real business value is doing the right things. Unfortunately this is much, much tougher than doing things right. It very quickly gets into the messy realm of egos, politics, control, tribalism and the like. But it can’t be avoided if you want to take your team to the next level.
- It’s not enough to “do things right” – you also have to “do the right things”, otherwise you may just be building the wrong things faster
- Use data and ROI to guide your decisions
- Put people who have context and “think right” in charge of key decisions
- Engage the whole team and create checks and balances so bad ideas can’t be ramrodded through the process