Accelerated Velocity: Doing the Right Things

So far in this series I’ve shared thoughts on how to do things right – how to leverage best practices and develop skilled practitioners to get excellent results.  Doing things right, however, doesn’t mean you’re doing the right things; it could mean you’re just doing the wrong things much faster.

The hard truth is that doing things right is easier than doing the right things.  The path to the former takes hard work but is relatively clear and straightforward.  The path to doing the right things is considerably more opaque and mysterious.  Just compare the number of books and blogs describing how to build software vs. what to build with software to see the impressive gap between the two.

I’ve spent most of my career working to do both.  My primary responsibilities as an engineering leader have been to ensure the team is working effectively and efficiently.  But in my various executive and consulting roles I’ve had both the opportunity and obligation to be a thought leader in the areas of business, product and platform strategy.  Through these roles I’ve developed a deep respect for the challenges and upsides of choosing the right path.  I’ve also learned that an engineering leader who isn’t concerned with the question of “are we working on the right things” is doing their team a huge disservice.

There isn’t a formula or cookbook I’ve discovered that guarantees success, but I’ve found several ingredients that radically improve your chances of doing the right things as an engineering team.  We do all of these – some better than others – at Bonial.

Data / Situational Awareness

You can’t make good decisions about where to invest if you don’t know what’s going on with your systems or your users.  In a previous article I discussed at length why this is critical and how Bonial developed situational awareness around system performance and stability. 

It’s just as important to know your users.  Note that I didn’t say, “know what your users are doing”.  That’s easy and only tells part of the story.  What you really want to know is “why” they are doing what they’re doing and, if possible, what they “want” to do in the future.  That’s tough and requires a multi-faceted approach.

For this you’ll want both objective and subjective data to create a complete picture.  Objective data will come from event tracking and visualization (e.g. Google Analytics or home-grown data platforms like the Kraken at Bonial).  Subjective data will come from usability studies, user interviews, app reviews, etc.  Combined, this data and intelligence should enable you to paint a pretty good picture of the user.
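To make the objective half concrete, here’s a minimal sketch of what an event tracker might emit.  The field names and the “brochure_opened” event are hypothetical, and a real pipeline (Google Analytics, Kraken, etc.) would replace the print with an HTTP post or queue write:

```python
import json
import time
import uuid

def track_event(user_id, name, properties=None):
    """Emit one flat, timestamped record per user action."""
    event = {
        "event_id": str(uuid.uuid4()),   # unique ID for deduplication
        "user_id": user_id,
        "name": name,
        "ts": time.time(),               # epoch seconds
        "properties": properties or {},  # event-specific context
    }
    print(json.dumps(event))             # stand-in for a post to the pipeline
    return event

track_event("user-123", "brochure_opened", {"retailer": "acme", "page": 1})
```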

As with most things this too has its limits.  Data is inherently backward-looking.  It will tell you what users have done and what they have liked, but extrapolating that into the future is a tricky exercise.  Even talking to users about the future doesn’t help much since they are notoriously bad at predicting how their perspective will change when faced with new paradigms.   

So treat your data as guidance and not gospel, and constantly update the guidance.  Run experiments based on hypotheses derived from the historical data and challenge them with new data.  If the experiment is sound and validates the hypothesis you can move forward with relative confidence. 
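As an illustration of “challenging a hypothesis with new data”, here’s a minimal two-proportion z-test sketch for an A/B experiment.  The conversion numbers are invented, and a real experiment would also need a pre-committed sample size and safeguards against peeking:

```python
from statistics import NormalDist

def ab_test(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: does variant B convert differently than A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided
    return p_value < alpha, round(p_value, 4)

# e.g. 480/10,000 conversions for A vs 560/10,000 for B
print(ab_test(480, 10_000, 560, 10_000))                  # (True, 0.0108)
```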

When in doubt, trust the data. 

ROI Focus

Building things for fun is everyone’s dream and many teams succumb to this temptation.  Some succeed; most fail.  Considering return on investment (ROI) can help avoid this trap.  Teams that are ROI focussed ask themselves how the R&D investment will be paid back and, hopefully, also show that the payback was realized.  The desired result is a focus on those things that have the potential to matter most.

Great, right?

Maybe; there are pitfalls.  Modeling ROI is not easy and the models themselves can be overly simple or (too often) complete crap.  The inverse is also true – people can spend so much time on the modeling that any benefit to velocity is lost.  It takes practice and time to find the right balance.
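To show the level of modeling that’s usually “good enough”, here’s a deliberately lightweight payback sketch.  The costs, benefit and maintenance rate below are invented; the point is to force the conversation, not to be precise:

```python
def simple_roi(dev_cost, monthly_benefit, horizon_months=24, maint_rate=0.15):
    """Back-of-envelope ROI: net benefit over a horizon, with ongoing
    maintenance assumed proportional to the original build cost."""
    maintenance = dev_cost * maint_rate * (horizon_months / 12)
    net_benefit = monthly_benefit * horizon_months - dev_cost - maintenance
    payback_months = dev_cost / monthly_benefit
    return {"net_benefit": net_benefit, "payback_months": payback_months}

# e.g. an 80k EUR feature expected to bring in 10k EUR/month
print(simple_roi(dev_cost=80_000, monthly_benefit=10_000))
# {'net_benefit': 136000.0, 'payback_months': 8.0}
```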

Some of the toughest ROI choices involve comparing features against non-functional requirements (NFRs) like stability, performance and technical debt.  An easy solution is to not beat your head against this “apples to oranges” problem; instead, give each team a fixed “time budget” for managing technical debt and investing in the architecture runway.  This will create some push-back in the short term (especially among product owners who want more capacity for features), but in the long term everyone will appreciate the increased velocity you’ll realize from making regular investments.  At Bonial we ask teams to allocate roughly 40% of their capacity to rapid response, technical debt reduction and architecture runway development.  That may seem like a lot, but if it makes the other 60% 7x faster, everyone wins (60% of capacity at 7x throughput is over 4x the original total output).

In the end, treat ROI as a guideline.  I think you’ll find that the simple act of asking people to think in these terms will elevate the conversations and make some tough decisions easier.

Context

The more people that know your business, the better.  Your engineers, testers, data scientists, operations specialists and designers each make dozens or hundreds of decisions a day, small and large, that affect the business.  Most of these decisions will require them to extrapolate details from general guidance.  If they don’t understand the business or, more specifically, the “why” of the guidance, then there’s a good chance they’ll miss the mark on the details.

So take the time to explain the “why” of decisions.  Educate your people on business fundamentals.  Share numbers.  Answer their questions.  And, most important, be honest even if there’s bad news to share.  It’s better that they are armed with difficult facts than confused with half-truths and spin.  You’ll be surprised at how many people will respond positively to the respect you show them by being honest.

Calling Bullshit

Some companies work under a model in which engineering is expected to meekly follow orders from whoever is driving the product strategy.  This is foolish to the point of being reckless.  Some of the smartest people and most analytical thinkers in your company are in the R&D organization.  Why cut that collective IQ out of the equation?

Smart companies involve the engineering teams in ideation as well as implementation.  The best companies go one step further – they give engineering implicit control over what they build.  Product managers or other stakeholders have to convince engineering of their idea; there is no dictatorial power.

Some may fear that this leads to a situation where the product authority becomes powerless or marginalized.  While I’ve seen a number of product teams that were largely side-lined, it was never because they weren’t given enough authority – it was because they didn’t establish themselves as relevant.  Good, competent product managers need to win over the engineers and stakeholders with demonstrated competence. 

At Bonial, the product team has the responsibility for prioritizing the backlog but the engineering team has the responsibility for committing to and delivering the work.  This split gives a subtle but implicit veto to the engineering team.  Most of the time the teams are in sync, but at times the engineers call “bullshit” and refuse to accept work – usually due to an unclear ROI or a clear conflict with stated goals.  This creates some short-term tension, but over the long term it leads to healthy relationships between capable product managers and engaged engineering teams.

People who Think Right

My mentor used to say that “some people think right, and some don’t.”  What he meant was that some people have a knack for juggling ambiguity; when faced with a number of possible choices, they are more likely than not to pick one of the better ones.  People who “think right” thrive in a leader-leader environment; people who don’t are dangerous.

Why?  Because after all the data has been collected, all of the models have been built and all of the (unbiased) input has been collected, decisions still need to be made.  More often than not there will be several options on the table.  Certainty will be elusive.  In the end there’s an individual making a choice using all of the analytic, intuitive, conscious and sub-conscious tools available to them.  Make consistently right decisions and you have a fair shot at success.  Make consistently wrong decisions and you’ll likely fail.

Some people are simply far better than others at making the right decisions.  These are the people you want in key roles.

The trick is how best to screen for these people.  At Bonial we use open-ended case studies and other “demonstrations of thought and work” during the recruiting process to get a glimpse of how people think.  We’ve found this to be very effective at screening out clear mismatches, but a short, artificial session can only go so far.  After that it’s a matter of observation during trial periods and, eventually, selection for fitness through promotions.

Closing Thoughts

“Doing the right things” is an expansive topic.  This article just scratches the surface; I could probably write a book on this topic alone.  Once you have the basics of SDLC execution in place – good people, agile processes, devops, architectural runway, etc. – the main lever you’ll have to drive real business value is in doing the right things.  Unfortunately this is much, much tougher than doing things right.  It very quickly gets into the messy realm of egos, politics, control, tribalism and the like.  But it can’t be avoided if you want to take your team to the next level. 

Good luck.

  • It’s not enough to “do things right” – you also have to “do the right things”, otherwise you may just be building the wrong things faster
  • Use data and ROI considerations to guide your decisions
  • Put people who have context and “think right” in charge of key decisions
  • Engage the whole team and create checks and balances so bad ideas can’t be ramrodded through the process

Accelerated Velocity: Getting Uncomfortable

“Confident.  Cocky.  Lazy.  Dead.”  This admonishment against complacency was the mantra of Johnny “Dread” Wulguru, the villain in Tad Williams’ Otherland saga.  As true as it is for assassins like Dread, it’s also true (though perhaps not as literally lethal) for teams and companies that choose to rest on their laurels and stop challenging themselves.

Complacency is the enemy of innovation.  This has been proven over and over throughout history in every domain as once successful or dominant players suddenly found themselves lagging behind.  This is also a leadership failure.  Good leaders strive to prevent comfort from becoming complacency. 

Jeff Bezos at Amazon has baked this into the DNA of the company as enshrined in his “Day 1” message to shareholders.  Of note is what Mr. Bezos says happens when companies get comfortable on Day 2:

“Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.”

Confident.  Cocky.  Lazy.  Dead.

It’s not easy, though.  Comfort is the reward for success after all.  “Don’t fix something that ain’t broke.”  Right?

Wrong.  The reward for success is being in position to hustle and build on that success.  Period. 

Getting Uncomfortable

It’s no different in software development teams. 

By 2016 we’d made significant strides in velocity and efficiency in Bonial product development.  Processes were in place.  An architecture roadmap existed.  Teams were healthy.  The monolith was (mostly) broken up.  AWS was being adopted.  All signs pointed to very successful changes having taken place.

At the same time I felt a certain sense of complacency settling in.  The dramatic improvements over the past couple of years had some thinking that it was now “good enough.”  Yet we still had projects that ran off the rails and took far longer than they should have.  We still couldn’t embrace the idea of an MVP.  We still had mindsets that change was dangerous and scary.  And, perhaps most important, many had a belief that we were as fast as we could or needed to be. 

Yes, we were better and faster, but I knew we had only begun to tap our potential.  It was time to get uncomfortable.

Engineering Change

Changing a deep-seated mindset in a large organization using a head-on approach is tough.  An easier and often more effective approach is to engineer and successfully demonstrate change in smaller sub-groups and spread out from there.  Once people see what is possible or, better yet, experience it themselves, they tend to be quite open to change. 

So we looked for opportunities to challenge individual teams to “think different”.  For example, on several occasions the company needed an important feature insanely fast.  Rather than say no, we asked teams to work in “hackathon mode” – essentially, to do whatever it took to get something to market in a few days even if the final solution was wrapped in duct tape and hooked to life support.  Not surprisingly we usually delivered and the business benefitted massively.  Yes, we then had to spend time refactoring and hardening to make the solution really stable, but the feature was in the market, business was reaping the benefits and the teams were proud of delivering fast.

On another occasion we had a team that struggled with velocity due, in part, to lack of test automation and an over-reliance on manual testing.  So I challenged the team to deploy the next big feature with zero manual testing; they had to go to production with only automated tests.  This made them very uncomfortable.  I told them I had their backs if it didn’t work out – they only needed to give it their best shot.  To their credit, the team signed up for the challenge and the release went out on time and had no production bugs.  This dramatic success made a strong statement to the rest of the organization.

Paradigm Shift

We also took advantage of our new app ecosystem.  Over the past few years the company has started several new “incubation” initiatives to explore new possibilities and expand our product portfolio.  We didn’t want to do these initiatives with our core product development teams because (a) we didn’t want to be continually wrestling with questions as to whether to focus on the new or old products, and (b) we feared that doing things like the “core” teams would be too slow. 

So we spun out standalone teams with all of the resources needed to operate independently.  Not surprisingly these “startup” teams moved much faster than any of our core teams.  In part this was because they were not burdened by legacy systems, technical debt, and risk/exposure of making mistakes that affect millions of users. But I think the bigger part was sheer necessity.  We ran our incubation projects like mini startups – they received funding, a target and a timeline and they had to hit those targets (or at least show significant progress) in order to receive more funding.  As a result, the teams were intensely focussed on delivering MVPs as quickly as possible, measuring the results in the market, and pivoting if needed. 

Between 2015 and 2017 we ran three major incubation projects and each one was faster than the last.  The most recent, Fashional, went from funding to launch in less than 12 weeks, which included ramping up development teams in two countries, building web and native mobile apps and lining up initial partners and marketing launch events.  This proven ability to move fast made a strong statement to our other teams. 

We soon had “core” teams making adjustments and shifting their thinking.  Over the next few quarters, every team embraced a highly iterative, minimalistic approach to delivery that enabled us to try more things more quickly and, when needed, take more aggressive risks.  Now each team strives to deliver demonstrable, user-facing value every sprint.  Real value, not abstract progress.  Just like the agile book says.  This isn’t easy but is fundamentally required to drive minimalistic, iterative thinking.  The result is a dramatic improvement in velocity while having more fun (success is fun).

For sure this hasn’t been perfect.  Even today we still have teams that struggle to plan and deliver iteratively and we still have projects that take way too long.  On the flip side we have a much deeper culture of challenging ourselves, getting uncomfortable and continually improving. 

Confident.

Closing Thoughts

  • It’s easy to become complacent, especially after a period of success.  This is deadly.
  • Leaders must act to remove complacency and force themselves and their teams to “get uncomfortable” and push their own limits.
  • Break the problem into smaller chunks.  Work with entrepreneurial teams on initiatives that challenge the status quo.  Have them show the way.
  • Reward and celebrate success and make sure you have the team’s back.  Honor your commitments.

Accelerated Velocity: Clarifying Processes and Key Roles

In a previous article I argued that great people are needed in order to get great results. To be clear, this theorem is asymmetric: great people don’t guarantee great results. Far from it – the history of sports, business and the military is littered with the carcasses of “dream teams” that miserably underperformed.

No, there are several factors that need to be in place for teams to excel. The ability to take independent action, discussed in the previous article, is one of those factors. I’ll discuss others over the next few articles, starting here with clarity around processes and roles.

Even the best people have trouble reaching full potential if they don’t know what’s expected of them. True, some people are capable of jumping in and defining their own roles, but this is rare. Most will become increasingly frustrated, not knowing on any given day what’s expected of them and what they need to do to succeed.

People in teams also need to understand the conventions for how best to work with others. How they plan, collaborate, communicate status, and manage issues all play a part in defining how effective the team is. Too much, too little, or too wrong, and a high potential team will find itself hobbled.

The same applies to teams and teams-of-teams. Teams need clarity about their role within the larger organization. They also need common processes to facilitate working together in pursuit of common goals.

Popular software development methodologies provide the foundation for role and process clarity, with the “agile” family of methodologies being the de-facto norm. These frameworks typically come with default role definitions (e.g. scrum master, product owner) as well as best practices around processes and communications. When applied correctly they can be powerful force multipliers for teams, but adopting agile is not a trivial exercise.  In addition, these frameworks only cover a portion of the clarity that’s needed.

Bonial’s Evolution

Bonial in 2014 was maturing as an agile development shop, but there were gaps in role definitions, team processes, and inter-team collaboration that suppressed the team’s potential. Fortunately Bonial has always had an abundance of kaizen – restlessness and a desire to always improve – so people were hungry to change. No-one was particularly happy with the status quo and there was a high willingness to invest in making things better.

We rolled up our sleeves and got started…

We attacked this challenge along multiple vectors. First, we needed a process methodology that would not only guide teams but also provide tools for inter-team coordination and portfolio management. The product and engineering leadership teams chose the Scaled Agile Framework (SAFe) as an over-arching team, program and portfolio management methodology. It was not the perfect framework for Bonial but it was good enough to start with and addressed many of the most pressing challenges.

Second, we spent time more clearly defining the various agile roles and moving responsibilities to the right people. We started with the very basics as broken down in the following table:

Area of Responsibility | Role Name (Stream / Team) | Notes
What? | Product Manager, Product Owner | Ensures the team is “doing the right things”
Who and When? | Engineering Manager, Team Lead | Ensures that the team is healthy and “doing things right” while minimizing time to market
How? | Architect, Lead Developer | Ensures that the team has architectural context and runway and is managing tech debt

We created general role definitions for each position, purposely leaving space and flexibility for the people and teams to adapt as appropriate.  (I know many agile purists will feel their blood pressure going up after reading the table above, but I’m not a purist and this simplicity was effective in getting things started.) 

A quick side note here. One of the unintended consequences of any role definition is that they tend to create boxes around people. They become contracts where responsibilities not explicitly included are forbidden and the listed responsibilities become precious territory to guard and protect. I hate this, so I emphasized strongly that (a) role definitions are guidelines, not hard rules, and (b) the responsibility for mission success lies with the entire team, so it’s ok to be flexible so long as everything gets done.

Third, we augmented the team. We hired an experienced SAFe practitioner to lead our core value streams, organize and conduct training at all levels, and consult on best practices from team level scrum to enterprise level portfolio management. This was crucial; the classroom is a great place to get started, but it’s the day-to-day practice and reinforcement that makes you a pro.

Finally, we placed a lot of emphasis on retrospectives and flexibility. We learned and continually improved. We tried things, keeping those that succeeded and dropping those that failed. Over time, we evolved a methodology and framework that fit our size, culture and mission, eventually driving the massive increases in velocity and productivity that we see today.

Team Leads

There was one more big role definition gap that was causing a lot of confusion and that we needed to close: who takes care of the teams? While agile methodologies do a good job of defining the roles needed to get agile projects done, they don’t define roles needed to grow and run a healthy organization. For example, scrum has little to say regarding who hires, nurtures, mentors, and otherwise manages teams. Those functions are critical and need a clear home.

In Bonial engineering, we put these responsibilities on the “team lead” role. This role remains one of the most challenging, important and rewarding roles in Bonial’s engineering organization and includes the following responsibilities:

  • People
    • Recruiting
    • Personal development
    • Compensation administration
    • Morale and welfare
    • General management (e.g. vacation approvals)
    • Mentoring, counseling and, if needed, firing
  • Process
    • Effective lean practices
    • Efficient horizontal and vertical communications
    • Close collaboration with product owner (PO)
  • Technology
    • Architectural fitness (with support from the architecture team)
    • Operational SLAs and support (e.g. “On call”)
    • “Leading by example” – rolling up sleeves and helping out when appropriate
  • Delivery
    • Accountable for meeting OKRs
    • Responsible for efficient spend and cost tracking

That’s an imposing list of responsibilities, especially for a first-time manager. We’d be fools to thrust someone into this role with no support, so we start with an apprenticeship program – where possible, first-time leads shadow a more experienced lead for several months, only taking on individual responsibilities when they’re ready. We also train new leads in the fundamentals of leadership, management and agile, and each lead has active and engaged support from their manager and HR. Finally, we give them room to succeed, fail and learn.

So far this model has worked well. People tend to be nervous when first stepping into the role, but over time become more comfortable and thrive in their new responsibilities. The teams also appreciate this model. In fact, one of the downsides has been that it’s difficult to recruit into this role since it contains elements of traditional scrum master, team manager and engineering expert – a combination that is rare in the market. As such, we almost always promote into the role.

Closing Thoughts

In the end we know that no one methodology (or even a mashup of methodologies) will satisfy every contingency. To that end there are two important principles underpinning how we operate: flexibility and ownership. If something needs to be done, do it. It’s great if the person who is assigned a given role does a full and perfect job, but in the end success is everyone’s responsibility, so “that’s not my role” is no excuse when they can’t or won’t do it.

Some closing thoughts:
• People need to understand their roles and the expectations put on them to be most effective.
• Teams need to have a unifying process to facilitate collaboration and avoid chaos and waste.
• The overarching goal is team success; all members of the team should have that as their core role description.
• Flexibility is key. Methodologies are a means to an end, not the ends themselves.

Accelerated Velocity: Enabling Independent Action

Inefficiency drives me crazy.  It’s like fingernails on a chalkboard.  When I’m the victim of an inefficient process, I can’t help but stew on the opportunity costs and become increasingly annoyed.  This sadly means I’m quite often annoyed, since inefficiency seems to be the natural rest state for most processes.

There are lots of reasons why inefficiency is the norm, but in general they fall into one of the following categories:

1) Poor process design

2) Poor process execution

3) Entropy and chance

4) External dependencies

The good news in software development is that Lean/agile best practices and reference implementations cover process design (#1).  Process execution (#2) can likewise be helped by hiring great people and following agile best practices.  Entropy (#3) can’t, by definition, be eliminated but the effects can be mitigated by addressing the others effectively.

Which leaves us with the bane of efficient processes and operations: dependencies (#4). 

Simply put, a dependency is anything that needs to happen outside of the process/project in question in order for the process/project to proceed or complete.  For example, a software project team may require an API from another team before it can finish its feature.  Likewise a release may require certification by an external QA team before going to production.  In both cases, the external dependency is the point where the process will likely get stuck or become a bottleneck, often with ripple effects extending throughout the system.  The more dependencies, the more chances for disruption and delay.
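The compounding effect is easy to underestimate.  Here’s a toy model: if each external dependency independently lands on time with some probability, the chance the project is unblocked on time shrinks exponentially with the dependency count (the 90% figure below is purely illustrative):

```python
def on_time_probability(n_dependencies, p_each=0.9):
    """Probability that all n independent dependencies land on time."""
    return p_each ** n_dependencies

for n in (1, 3, 5, 10):
    print(f"{n} dependencies -> {on_time_probability(n):.0%} chance of no delay")
# 1 -> 90%, 3 -> 73%, 5 -> 59%, 10 -> 35%
```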

So how does one reduce the impact of dependencies?

The simplest way is to remove the dependencies altogether.  Start by forming teams that are self-contained, aligned behind the same mission, and ideally report to the same overall boss.  Take, for example, the age-old divisions between product, development, QA, and operations.  If these four groups report to different managers with different agendas, then the only reasonable outcome will be pain.  So make it go away!  Put them all on the same team. Get them focussed on the same goals.  Give them all a stake in the overall success.

Second, distribute decision making and control.  Any central governance committee will be a chokepoint, and should only exist when (a) having a chokepoint is the goal, or (b) the stakes are so high that there are literally no other options.  Otherwise push decision-making into the teams so that there is no wait time for decisions.  Senior management should provide overall strategic guidance and the teams should make tactical decisions.  (SAFe’s guidance on decentralized decision-making describes this well.)

In 2014, Bonial carried a heavy burden of technical and organizational dependencies, and the result was near gridlock.

At the time, engineering was divided into five teams (four development teams and one ops team), and each team had integrated QA and supporting ops.  So far, so good.  Unfortunately, the chokepoints in governance and the technical restrictions imposed by a shared, monolithic code-base effectively minimized independent action for most of the teams, resulting in one, large, inter-connected mega-team.

There was a mechanism known as “the roadmap committee” which was nominally responsible for product governance, but in practice it had little to do with the roadmap and more to do with selective project oversight.  One of the roadmap committee policies held that nothing larger than a couple of days’ work could proceed without a blessing from the committee, so even relatively minor items languished in queues waiting for upcoming committee meetings.

What little did make it through the committee ran directly into the buzzsaw of the monolith.  Nearly all Bonial software logic was embedded in a single large executable called “Portal3”.  Every change to the monolith had to be coordinated with every other team to ensure no breakage.  Every release required a full regression test of every enterprise system, even for small changes on isolated components.  This resulted in a 3-4 day “release war-room” every two weeks that tied down both ops and the team unfortunate enough to be on duty.

It was painful.  It was slow.  Everyone hated it.

We started where we had to – on the monolith.  Efforts had been underway for a year or more to gradually move functionality off of the beast, but it became increasingly clear with each passing quarter that the “slow and steady” approach was not going to bear fruit in a timeframe relevant to mere mortals. So our lead architect, Al, and I decided on a brute force approach: we assembled a crack team which took a chainsaw to the codebase, broke it up into reasonably sized components, and then put each component back together. Hats off to the team that executed this project – wading through a spaghetti of code dependencies with the added burden of Grails was no pleasant task.  But in a few months they were done and the benefits were felt immediately.

The breakup of the monolith enabled the different teams to release independently, so we dropped the “integrated release” process and each team tested and released on their own.  The first couple of rounds were rough but we quickly hit our stride.  Overall velocity immediately improved upon removing the massive waste of the dependent codebase and labor-intensive releases.

The breakup of the monolith also untethered the various team roadmaps, so around this time we aligned teams fully behind discrete areas of the business (“value streams” in SAFe parlance). We pushed decision making into the teams/streams, which became largely responsible for the execution of their roadmap with guidance from the executive team.  The “roadmap committee” was disbanded and strategic planning was intensified around the quarterly planning cycle.  It was, and still is, during the planning days each quarter that we identify, review and try to mitigate the major dependencies between teams.  This visibility and awareness of dependency risk across all teams is critical to managing the roadmap effectively.

Eventually we tried to take it to the next level – integrating online marketing and other go-to-market functions into vertically aligned product teams – but that didn’t go so well.  I’ll save that story for another day.

The breakup of the monolith and distribution of control probably had the biggest positive impact in unleashing the latent velocity of the teams.  The progress was visible.  As each quarter went by, I marveled at how much initiative the teams were showing and how this translated into increased motivation and velocity. 

To be sure, there were bumps and bruises along the way.  Some product and engineering leaders stepped up and some struggled.  Some teams adapted quickly and some resisted.  Several people left the team in part because this setup required far more initiative and ownership than they were comfortable with.  But in fairly short order this became the norm and our teams and leaders today would probably riot if I suggested going back to the old way of doing things.

Some closing thoughts:

  • Organize teams for self-sufficiency and minimal skill dependencies
  • Minimize or eliminate monoliths and shared ownership
  • Keep the interface as simple, generic and flexible as possible when implementing shared systems such as APIs or backend business systems (see the sketch after this list)
  • Be transparent about dependencies and manage them closely
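Here’s a minimal sketch of what “simple, generic and flexible” can mean in practice.  The OfferService name and fields are hypothetical; the point is that callers see one narrow, versionable contract while everything behind it can evolve freely:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OfferQuery:
    """The entire request surface: a few generic fields, easy to extend."""
    region: str
    limit: int = 20

class OfferService:
    """Facade for a shared system; the only surface other teams depend on."""

    def find_offers(self, query: OfferQuery) -> list[dict]:
        # Storage, schema and team ownership behind this call can change
        # without breaking callers, as long as the contract holds.
        raise NotImplementedError
```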

Accelerated Velocity: Growth Path

I recently heard that the average tenure of engineers at tech companies is less than two years.  If true, it’s a mind-boggling critique of the tech industry.  What’s wrong with companies that can’t retain people for more than a year or two?  Seriously – who wants to work for a team where people aren’t around long enough to banter about the second season of Westworld?

I know there are many factors in play, especially in hot tech markets, but there’s one totally avoidable fault that is all too common: being stupid with growth opportunities. 

Software engineering is one of those fields where skills often increase exponentially with time, especially early in a career.  Unfortunately businesses seem loath to account for this growth in terms of new opportunities or increased compensation.  For example, companies set salaries at the time of hire, and that is what the employee is stuck with for their tenure at the company – with the exception, perhaps, of an annual cost-of-living increase.  At the same time, the employee is gaining experience, adding to their skills portfolio, and generally compounding their market value.  Within a year or two the gap between their new market value and their actual compensation has grown quite large.  As most businesses shudder at the idea of giving large raises on a percentage basis, the gap continues to grow and the employee eventually makes the rational decision to move to another company that will recognize their new market value, leaving the original company with an expensive gap in their workforce and a massive loss in knowledge capital.
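A toy model of that divergence, assuming a 2% annual cost-of-living raise against a market value compounding at 10% with experience (both rates invented for illustration):

```python
def comp_gap(start_salary, col_raise=0.02, market_growth=0.10, years=4):
    """Fixed salary with cost-of-living raises vs compounding market value."""
    salary = market = start_salary
    for year in range(1, years + 1):
        salary *= 1 + col_raise
        market *= 1 + market_growth
        print(f"year {year}: salary {salary:>9,.0f} vs market {market:>9,.0f}")

comp_gap(60_000)
# by year 4 the gap is already ~23,000 (64,946 vs 87,846)
```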

In addition, many companies take a highly individualist approach to compensation with a goal of getting maximum talent for the lowest price.  While this is textbook MBA, it fails in practice simply because it doesn’t take into account human psychology around relative inequality: when people feel they are not being treated fairly they get demotivated.  This purely free-market approach leads to a situation in which people doing the same work have massive disparities in compensation simply because some people are better negotiators than others.  The facts will eventually get out, leaving the person on the low end bitter and both people feeling like they can’t trust their own company.  This is a failing strategy in the long term.

This is what I’ve seen at most companies I’ve been in or around, and it was essentially the situation at Bonial in 2014.  There was a very high variance in compensation – on the extreme end we had cases in which developers were being paid half the salary of other developers on the same team despite similar experience and skills.  Salaries were also static – the contract salary didn’t change unless the employee mustered the courage to renegotiate the contract.  The negotiation sessions themselves were no treat for either the employee or their manager – in the absence of any framework they were essentially contests of wills, generally leaving both parties unsatisfied.

So we set out to develop a system that would facilitate a career path and maintain relative fairness across the organization.  We modeled it on a framework I’d developed previously, which works as follows:

Basically, as a person gains experience (moving from bottom-left to top-right in the framework’s visualization) they earn the chance to be promoted, which comes with higher compensation but also higher expectations.  They can also explore both technical specialist and management tracks as they become more senior, and even move back and forth between them.

The hallmarks of this system are:

    1. Systematic: Compensation is guided by domain skills – actual contributions to the business and market value – not by negotiation skills. 
    2. Fair: People at the same career/skill level will be compensated similarly.
    3. Regular: Conversation about career level and compensation happens at least once per year, initiated by the company. 
    4. Motivational: People have an understanding of what they need to demonstrate to be promoted. 
    5. Flexible: People have three avenues for increased compensation:
      • Raises – modest boosts in compensation for growth within their current career level based on solid performance.  This happens in between promotions.
      • Promotions – increases to compensation based on an employee qualifying for the next career level (with increased expectations and responsibilities).  This is where the big increases are and what everyone should be striving for.
      • Market increases – increases due to adjustment of the entire salary band based on an evaluation of the general market.

From a management perspective, this system also has some additional upsides:

  • Easy to budget.  Instead of planning with names and specific salaries, one can build a budget based on headcount at certain skill levels (see the sketch after this list). 
  • Easy to adjust.  If the team decides it needs a mobile developer or a test automator instead of a backend developer, for example, it simply trades one of its authorized positions for one of a similar value.  Likewise it can shift seniority around as needed to meet its goals.
  • Mechanism for feedback.  By reserving promotions and raises for the deserving contributors, this system provides an implicit feedback mechanism.
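A sketch of the budgeting upside, with invented band midpoints per level – planning happens against levels rather than names, and swapping one position for another at the same level is budget-neutral by construction:

```python
# Hypothetical salary-band midpoints per career level (EUR/year).
BAND_MIDPOINT = {"junior": 50_000, "mid": 65_000, "senior": 85_000}

def team_budget(headcount_by_level):
    """Budget = headcount x band midpoint; no names or individual salaries."""
    return sum(BAND_MIDPOINT[level] * n for level, n in headcount_by_level.items())

print(team_budget({"junior": 2, "mid": 3, "senior": 2}))  # 465000
```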

So far the system seems to be working well at Bonial, measured as much by what isn’t happening as what is.  For example, people who have left the team seldom call out compensation as their primary motivator.  We’ve also had few complaints about people feeling they are not being paid fairly compared to their peers.  

As a side note, we conduct regular employee satisfaction surveys and ask how employees feel about their compensation.  Interestingly, their responses on their feeling about compensation vs market do not strongly correlate with their overall satisfaction.  What does correlate?  Their projects, the tech they work with, their growth opportunities, the competence of their team mates, and their leads.  So these are the areas we have and will continue to invest in.

Some closing thoughts:

  • Professionals want to know they are being compensated fairly both within the company and within the market.  That way they can focus on what they’re creating, not be worried about their pay.
  • Professionals want the opportunity to grow and to be recognized (and rewarded) for their growth.  Providing a growth path inside the company improves employee retention and reduces costs related to talent flight.
  • Compensation is an asymmetric demotivator.  Low or unfair compensation will demotivate, but overly high compensation isn’t generally a motivator.  So make sure you’re out of the “demotivating” range and then focus on key motivators, especially in the area of day-to-day satisfaction.

Accelerated Velocity: Situational Awareness

“If a product or system chokes and it’s not being monitored, will anyone notice?”  Unlike the classic thought experiment, this tech version has a clear answer: yes.  Users will notice, customers will notice, and eventually your whole business will notice. 

No-one wants their first sign of trouble to be customer complaints or a downturn in the business, so smart teams invest in developing “situational awareness.”  What’s that?  Simple – situational awareness is the result of having access to the tools, data and information needed to understand and act on all of the moving factors relating to the “situation.”  The term is often used in the context of crisis situations or other fast-paced, high-risk endeavors, but it applies to business and network operations as well.

Product development teams most definitely need situational awareness.  The product managers and development leads need to know what their users are doing and how their systems are performing in order to make wise decisions – for example, should the next iteration focus on features, scale or stability?  Sadly, these same product teams often see the tracking and monitoring needed for developing situational awareness as “nice-to-haves” or something to be added when the mythical “someday” arrives.

The result?  Users having good or bad experiences and no-one knowing either way.  Product strategy decisions being made on individual bias, intuition and incomplete snippets of information.  Not good.

Sun Tzu put it succinctly:

“If you know neither the enemy nor yourself, you will succumb in every battle.”

Situational awareness is a huge topic, so in this series I’m going to limit my focus to data collection (tracking and monitoring) and insights (analytics and visualization) at the product team level.  For the purposes of this series I’ll define “tracking” as the data and tools that show what users/customers are doing and “monitoring” as the data and tools that focus on system stability and performance.  Likewise I’ll use “analytics” to refer to tools that facilitate the conversion of data into usable intelligence and “visualization” for the tools that make that intelligence available to the right people at the right time.  I’ll cover monitoring in this article and tracking in a later article.

At Bonial in 2014 there was a feeling that things were fine – the software systems seemed to be reasonably stable and the users appeared happy.  Revenue was strong and the few broad indicators viewed by management seemed healthy.  Why worry?   

From a system stability and product evolution perspective it turns out there was plenty of reason to worry.  While some system-level monitoring was in place, there was little visibility into application performance, product availability or user experience.  Likewise our behavioral tracking was essentially limited to billing events and aggregated results in Google Analytics.  Perhaps most concerning: one of the primary metrics we had for feature success or failure was app store ratings.  Hmmm.

I wasn’t comfortable with this state of affairs.  I decided to start improving situational awareness around system health so I worked with Julius, our head of operations, to lay out a plan of attack.  We already had Icinga running at the system level as well as DataDog and Site24x7 running on a few applications – but they didn’t consistently answer the most fundamental question: “are our users having a good experience?” 

So we took some simple steps like adding new data collectors at critical points in the application stack.  Since full situation awareness requires that the insights be available to the right people at the right time, we also installed large screens around the office that showed a realtime stream of the most important metrics.  And then we looked at them (a surprisingly challenging final step). 

(Image: the Bonial NOC monitor wall.)
(Image: one of my “go to” overviews of critical APIs, showing two significant problems during the previous day.)
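In that spirit, here’s a minimal sketch of the kind of synthetic check that answers “are our users having a good experience?”  The endpoints and thresholds are hypothetical, and a real setup would feed results into an alerting tool (Icinga, DataDog, etc.) rather than printing them:

```python
import time
import urllib.request

# Hypothetical critical endpoints; real ones would come from service configs.
ENDPOINTS = {
    "offers_api": "https://api.example.com/health",
    "search_api": "https://search.example.com/health",
}

def probe(name, url, timeout=2.0, slow_ms=500):
    """Is the endpoint up, and fast enough to feel good to a user?"""
    start = time.monotonic()
    try:
        status = urllib.request.urlopen(url, timeout=timeout).status
    except Exception:
        return name, "DOWN", None
    latency_ms = (time.monotonic() - start) * 1000
    state = "OK" if status == 200 and latency_ms < slow_ms else "DEGRADED"
    return name, state, round(latency_ms)

for name, url in ENDPOINTS.items():
    print(probe(name, url))
```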

The initial results weren’t pretty.  With additional visibility we discovered that the system was experiencing frequent degradations and outages.  In addition, we were regularly killing our own systems by overloading them with massive online marketing campaigns (for which we coined the term: “Self Denial of Service” or SDoS).  Our users were definitely not having the experience we wanted to provide.

(A funny side note: with the advent of monitoring and transparency, people started to ask: “why has the system become so unstable?”)

We had no choice but to respond aggressively.  We set up more effective alerting schemes as well as processes for handling alerts and dealing with outages.  Over time, we essentially set up a network operations center (NOC) with the primary responsibility of monitoring the systems and responding immediately to issues.  Though exhausting for those in the NOC (thank you), it was incredibly effective.  Eventually we transferred responsibility for incident detection and response to the teams (“you build it you run it”) who then carried the torch forward.

Over the better part of the next year we invested enormous effort into triaging the immediate issues and then making design and architecture changes to fix the underlying problems.  This was very expensive as we tapped our best engineers for this mission.  But over time daily issues became weekly became monthly.  Disruptions became less frequent and planning could be done with reasonable confidence as to the availability of engineers.  Monitoring shifted from being an early warning system to a tool for continuous improvement. 

As the year went on, the stable system freed up our engineers to work on new capabilities instead of responding to outages.  This in turn became a massive contributor to our accelerated velocity.  Subsequent years were much the same – with continued investment in both awareness and tools for response, we confidently set and measured aggressive SLAs.  Our regular investment in this area massively reduced disruption.  We would never have been able to get as fast as we are today had we not made this investment.

We’ve made a lot of progress in situational awareness around our systems, but we still have a long way to go.  Despite the painful journey we’ve taken, it boggles my mind that some of our teams still push monitoring and tracking down the priority list in favor of “going fast”.  And we still have blind spots in our monitoring and alerting that allow edge-case issues – some very painful – to remain undetected.  But we learn and get better every time.

Some closing thoughts:

  • Ensuring sufficient situational awareness must be your top priority.  Teams can’t fix problems that they don’t know about.
  • Monitoring is not an afterthought.  SLAs and associated monitoring should be a required non-functional requirement (NFR) for every feature and project.
  • Don’t allow pain to persist – if there’s a big problem, invest aggressively in fixing it now.  If you don’t you’ll just compound the problem and demoralize your team.
  • Lead by example.  Know the system better than anyone else on the team.

 


Accelerated Velocity: Building Great Teams

Note: this article is part 3 of a series called Accelerated Velocity.  This part can be read stand-alone, but I recommend that you read the earlier parts so as to have the overall context.

People working in teams are at the heart of every company.  Great companies have great people working in high performing teams.  Companies without great people will find it very difficult to get exceptional results. 

The harsh reality is that there aren’t that many great people to go around.  This results in competition for top talent, which is especially true in tech.  Companies and organizations use diverse strategies in addressing this challenge.  Some use their considerable resources (e.g. cash) to buy top talent though with dubious results – think big corporations and Wall Street banks.  Some create environments that are very attractive to the type of people they’re looking for – think Google and Amazon.  Some purposely start with inexperienced but promising people and develop their own talent – a strategy used by the big consulting companies.  Many drop out of the race altogether and settle for average or worse (and then hire the consulting companies to try to solve their challenges with processes and technology – which is great for the consulting companies).

But attracting talent is only half the battle.  Companies that succeed in hiring solid performers then have to ensure their people are in a position to perform, and this brings us to their teams.  Teams have a massive amplifying effect on the quantity and quality of each individual’s output.  My gut tells me that the same person working on two different teams may be 2-3x as productive depending on the quality of the team.

So no matter how good a company is at attracting top talent, it then needs to ensure that the talent operates in healthy teams. 

What is a healthy team?  From my experience it looks something like this:

  • Competent, motivated people who are…
  • Equipped to succeed and operate with…
  • High integrity and professionalism…
  • Aligned behind a mission / vision

That doesn’t seem too hard.  So why aren’t healthy teams the norm?  Simple: because they’re fragile.  If any of the above pieces are missing, the integrity of the team is at risk.  Throw in tolerance for low performers, arrogant assholes, and whiners, mix in some disrespect and fear, and the team is broken.

(Note that the negative influences outweigh the positive – as the proverb says: “One bad apple spoils the whole bushel.”  If you play sports you know this phenomenon well – a team full of solid players can easily be undone by a single weak link that disrupts the integrity of the team.)

This leads me to a few basic rules I follow when developing teams:

  1. Provide solid leadership
  2. Recruit selectively
  3. Invest in growth and development
  4. Break down barriers to getting and keeping good people
  5. Aggressively address low-performance and disruption

Bonial had a young team with a wide range of skill and experience in 2014.  Fortunately many of the team members had a bounty of raw talent and were motivated (or desired to be motivated).  Unfortunately there were also quite a few under-performers as well as some highly negative and disruptive personalities in the mix.  The combination of inexperience, underperformance and disruption had an amplifying downward effect on the teams.

To build confidence and start accelerating performance we needed to turn this situation around.  We started by counseling and, if behavior didn’t change, letting go of the most egregious low performers and disruptive people – not an easy thing to do, and somewhat frowned upon both in the company and in German culture.  But the cost of keeping them on the team, thereby neutralizing and demoralizing the high performers, was far higher than the pain and cost of letting them go.

(A quick side note: there were concerns among the management that letting low-performers go would demoralize the rest of the team.  Not surprisingly, quite the opposite happened – the teams were relieved to have the burdens lifted and were encouraged to know that their leads were committed to building high performing teams.)

We started doing a better job of mentoring people and setting clear performance goals.  Many thrived with guidance and coaching; some didn’t and we often mutually decided to part ways.  Over time the culture changed to where low performance and negativity were no longer tolerated.

At the same time we invested heavily in recruiting.  We hired dedicated internal recruiters specifically focussed on tech recruits.  We overhauled our recruiting and interview process to better screen for the talent, mentality and personality we needed.  We added rigor to our senior hiring practices, focussing more on assessing what the person can do vs what they say they can do.  And we added structure to the six month “probation” period, placing and enforcing gates throughout the process to ensure we’d hired the right people.  Finally, we learned the hard way that settling for mediocre candidates was not the path to success; it was far better to leave a position unfilled than to fill it with the wrong person.

How did we attract great candidates?  We focussed on our strengths and on attracting people who valued those attributes: opportunities for growth, freedom to make a substantial impact, competent team-mates, camaraderie, a culture of respect, and exposure to cutting-edge technologies.  Why these?  Because year over year, through employee satisfaction surveys and direct feedback, we find these elements correlate very strongly with employee satisfaction, even more so than compensation and other benefits.  In short, we’ve worked hard to create an environment where our team-mates are excited to come to work every day.

(This is not to say we ignored competitive compensation; as I’ll describe in a later post, we also worked to ensure we paid a fair market salary and then provided a path for increasing compensation over time with experience.)

Over time, as our people became more experienced, our processes matured and our technology set became more advanced, Bonial became a great place for tech professionals to sharpen their skills and hone their craft.  New team members brought fresh ideas and at the same time had the opportunity to learn both from what we already had and from what they helped create.  The result is what we have today: a team of teams full of capable professionals who are together performing at a level many times higher than in 2014.

Some closing thoughts:

  • You’re only as good as the people on the teams.
  • Nurture and grow talented people. Help under-performers to perform. Let people go when necessary.
  • Get really good at recruiting.  Focus on what the candidate will do for you vs what they claim to have done in the past.
  • Don’t fall into the trap of believing process and tools are a substitute for good people.

Footnote: If you haven’t yet, I suggest you read about Google’s insightful research on team performance and how “psychological safety” is critical to developing high performing teams.

Accelerated Velocity: Building Leaders

Note: this article is part 2 of a series called Accelerated Velocity.  This part can be read stand-alone, but I recommend that you read the earlier parts so as to have the overall context.

Positive changes require a guiding hand.  Sometimes this arises organically from a group of like-minded people, but far more often there’s a motivated individual driving the change.  In short – a leader.

Here’s the rub: the tech industry is notoriously deficient in developing leaders. Too often the first step in a leader’s journey starts when their manager leaves and they’re blessed with a dubious promotion… “Congratulations, you’re in charge now.  Good luck.”  If they’re fortunate, their new boss is an experienced leader and has time to mentor them.  In a larger organization they may have access to some bland corporate training on how to host a meeting or write a project plan.  But the vast majority of people thrust into leadership and management roles in tech are largely left to their own devices to succeed.

Let me pause for a moment and highlight a subtle but important point: leadership and management are different skills.  Leadership is creating a vision and inspiring a group of people to go after the vision; management is organizing, equipping, and caring for people as well as taking care of the myriad details needed for the group to be successful.  There’s some overlap, and an effective leader or manager has competence in both areas, but they require different tools and a different mindset. This article is focussed on the leadership component.

So what does it take to develop competent and confident leaders?  When I look at some of the best-in-class “leadership-centric organizations” – militaries and large consulting companies for example – I see the following common elements:

  1. Heavy up-front investment in training
  2. High expectations of the leaders
  3. A reasonably structured environment in which to learn and grow
  4. A continuous cycle in which role models will coach and mentor the next generation

How did this look at Bonial?

Upon arriving I inherited a 40-ish strong engineering organization broken up into five teams, each headed by a “team lead.” The problem was that these team leads had no clear mandate or role, little or no leadership and management training, and essentially no power to carry out a mandate even if they’d had one.

This setup was intended to keep the organization flat and centralize the administrative burden of managing people so as to allow the team leads to focus on delivery. Unfortunately this put the leads in a largely figurehead role – they represented their teams and were somehow responsible for performance but had few tools to employ and little experience with which to effectively deploy them. They didn’t hire their people, administer compensation or manage any budgets. In fact, they couldn’t even approve vacations.  To this day it’s not clear to me, or them, what authority or responsibility they had.

This arrangement also created a massive chokepoint at the center of the organization – no major decisions were made without approval from “above”. The results were demoralized leads and frustrated teams.

Changing this dynamic was my first priority.  To scale our organization we’d need to operate as a federation of semi-autonomous teams, not as a traditional hierarchical organization.  For this we needed leads who could drive the changes we’d make over the coming years, but this would require a major shift in mindset.  After all, if I couldn’t trust them to approve vacations, why should I trust them with driving ROI from the millions of euros we’d be investing in their teams?  Engineers have the potential to produce incredibly valuable solutions; ensuring they have solid leadership is the first and most important responsibility of senior management.

We started with establishing a clear scope of responsibility and building our toolbox of skills.  I asked the leads if they were willing to “own” the team results and, though a little nervous, most were willing.  This meant they would now make the calls as to who was on the team and how those people were managed. They took over recruiting and compensation administration. They played a much stronger role in ensuring the teams had clarity on their mission and how the teams executed the mission. They received budgets for training, team events and discretionary purchases. And, yes, they even took responsibility for approving vacations.

We agreed to align around the leader-leader model espoused by David Marquet (https://www.davidmarquet.com/) in his book "Turn the Ship Around!"  We read the book together and discussed the principles and how to apply them in daily practice.  The phrase "I intend to…" was baked into our vocabulary and mentality.  We eliminated top-down systems and learned to specify goals, not methods.  We focussed on achieving excellence, not just avoiding errors.  The list goes on.

I also started a “leadership roundtable” – 30 minutes each week where we’d meet in a small group and discuss experiences and best practices around core leadership and management topics: motivating people and teams, being effective, basic psychology, communicating, coaching and mentoring, discipline, recruiting, personal organizational skills, etc.  Over time, dozens of people – ranging from prospective team leads to product managers to people simply interested in leadership and management – participated in the roundtables, giving us a common foundation from which to work.

As I'll share in a future article, we also created a career growth model that fully supported a management track as well as a technical track and, most importantly, the possibility to move back and forth freely between the two.  We encouraged people to give management a try and offered mentoring and support plus the risk-free option of being able to switch back to their former role if they preferred.  In the early days this was a tough sell – "team lead" had the reputation of being mostly pain with little upside.  Nevertheless, a few brave souls gave it a shot and, to their surprise, found it rewarding (and have since grown into fantastic leads).

It wasn't easy – we had our fair share of mistakes, failures and redos – but the positive effects were felt almost immediately.  Over time this first generation of leads grew their teams and created cultures of continuous improvement.  As the teams grew, the original leads mentored new leaders to take over the new teams and the cycle continued.  As it stands today we have a dozen or so teams/squads led by capable leaders who started as software engineers, quality assurance pros, system engineers, etc.

For what it’s worth, I believe the number one factor driving Bonial’s accelerated velocity was growth in leadership maturity. If you’re looking to engineer positive change, start here.

Some closing thoughts:

  • Positive change requires strong leadership.
  • A single leader can start the change process, but large-scale and enduring change requires distributed leadership (e.g. "leader-leader").
  • Formal training can be a great source of leadership and management tools, but mastering those tools requires time, a safe and constructive environment and active coaching and mentoring.
  • Growing a leadership team is not a linear or a smooth process.  The person driving and guiding the development must commit to the long game and must be willing to accept accountability for inconsistent results from first generation leads as they learn their trade.

Read part 3: Building Great Teams

Accelerated Velocity: How Bonial Got Really Fast at Building Software

My boss, Max (Bonial Group’s CEO), and I sat down recently for a “year-in-review” during which we discussed the ups and downs of 2017 as well as goals for the new year.  In wrapping up the conversation, I shared with him my gut feeling that velocity and productivity had improved over the past couple of years and were higher than they’d ever been at Bonial – perhaps as much as double when compared to 2014.  

He asked if I could quantify the change, so on a frigid Sunday a couple of weeks ago I sat down with a mug of hot tea and our development records to see what I could do.  We've used the same "product roadmap" format since 1Q14 (described here), which meant I could use a "points"-style approach to quantify the business value delivered each quarter.  Since I was looking for relative change over time and applied the scoring consistently, I felt this was a decent proxy for velocity.

It took me a couple of hours but was well worth the effort.  Once I’d finished scoring and tabulating, I was pleasantly surprised to find that I’d significantly underestimated the improvements we’d made.  Here’s a high level overview of the results:

7X Velocity! Bonial team size, value delivered and productivity over time.

The net-net is that in 1Q 2018 we’ll be delivering ~630% more business value than we delivered in the first quarter of 2014, largely driven by the fact that each person on the team is ~250% more productive.  

Sweet.
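For anyone who wants to run a similar analysis, the underlying arithmetic is simple: velocity is the total points delivered in a quarter, and productivity is points per person.  Here's a minimal Python sketch – the point totals and headcounts are invented placeholders chosen only to roughly match the ratios above, not our actual records:

    # Minimal sketch of the relative-change math.  All numbers used below are
    # hypothetical placeholders, not Bonial's actual figures.
    def relative_change(points_then, people_then, points_now, people_now):
        velocity_gain = points_now / points_then
        productivity_gain = (points_now / people_now) / (points_then / people_then)
        return velocity_gain, productivity_gain

    # e.g. 100 points from 40 people in 1Q14 vs. 730 points from 85 people in 1Q18
    velocity, productivity = relative_change(100, 40, 730, 85)
    print(f"velocity: {velocity:.1f}x, productivity: {productivity:.1f}x")
    # -> velocity: 7.3x, productivity: 3.4x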

The obvious next question: how did we do this?

The short answer is that there is no short answer.  There was no single magic button that we pushed to set us on this path to accelerated velocity; this was a long campaign that started small and grew, eventually spanning people, process, technology and culture.  Over time these learnings, improvements, changes and experiments – some large, some small, some successful, some not – built on each other and eventually created an environment in which the momentum sustained itself.  

Over the next few weeks I’ll summarize the major themes here in this blog for both myself as well as anyone who’s interested.  Along this journey I plan to cover (and will link when available):

  1. Building Leaders
  2. Building Great Teams
  3. Creating Situational Awareness
  4. Providing a Growth Path
  5. Enabling Independent Action
  6. Clarifying Processes and Key Roles
  7. Creating an Architecture Runway
  8. Optimizing the SDLC with DevOps
  9. Getting Uncomfortable
  10. Doing the Right Things

Each of those topics could alone make for a small book, but I’ll try to keep the articles short and informative by focussing only on the most important elements.  If there’s an area in which you’d like me to dig deeper, let me know and I’ll see what I can do.  Assuming I get through all of those topics I’ll wrap things up with some final thoughts.

So let’s get started with part 2: Building Leaders

How we Plan at Bonial (part 3)

Collaborative digital stickies board that we use for planning.

Ok, after all that, how do we actually plan at Bonial?

The heart of our planning activities is the Quarterly Planning, which is loosely modeled on Program Increment (PI) Planning from SAFe.  During quarterly planning / PI planning, everyone in the product development organization – developers, designers, architects, testers, product managers, operations specialists, etc. – gets together for a couple of days to map out their next phase.  We do our planning during the previous quarter's HIP (Hardening, Innovation and Planning) sprint, which is sprint 6 of each quarter.

Before I dive into the actual planning days, I should point out that the preparations start several weeks earlier, when the product teams actively work with stakeholders, customer-facing teams and the executive team to validate the backlogs against the current company priorities and business realities.  The prep phase looks something like this:

  • The senior management team and product strategy board review the overall strategy and primary business goals to assess if any change in focus is needed.  
  • Next we make sure that product and delivery management has the same level of clarity. We get the delivery leads and product owners together and communicate the company goals for the upcoming quarter to them, taking the time to answer questions about strategy, challenges, current market trends etc. Our goal here is to make sure that all our leaders are able to bring clarity to their teams so that local decisions are made with the right context.
  • 3-4 weeks before the planning event, the product management team starts curating the backlogs for the different product and system streams.  They create a “long list” of major features and work items and meet with stakeholders, customers and Bonial management to validate priorities. 
  • A week before planning the "long lists" are reduced to "short lists" of the highest-priority items. This is probably the hardest part of the process and it requires saying "no" to things… we find that our stakeholders and customers all agree that discipline is needed so long as it mostly impacts other stakeholders and customers.  Over the years we've tried various formal mechanisms for prioritization – Weighted Shortest Job First, Feature Bucks, etc. (see the sketch after this list) – but in the end we find that different tools are needed for different situations and that, with experience, people often intuitively know the order.
  • Over the next week the product team spends time working through open questions and details while architects and engineers do the same on the technical side.  There’s also generally some intense discussions about “bubble” items – features that are right on the cusp of making the list – as well as hot items that didn’t make the list.
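A quick aside on the prioritization mechanics: Weighted Shortest Job First (WSJF), as SAFe defines it, is simply relative cost of delay divided by relative job size.  Here's a minimal sketch – the feature names and scores are invented purely for illustration:

    # WSJF = cost of delay / job size, everything scored on a relative scale.
    # Cost of delay = business value + time criticality
    #               + risk reduction / opportunity enablement.
    def wsjf(value, criticality, risk_opportunity, job_size):
        return (value + criticality + risk_opportunity) / job_size

    # Invented backlog items; the highest score gets planned first.
    backlog = {
        "checkout revamp":    wsjf(13, 8, 3, 8),    # 3.0
        "push notifications": wsjf(8, 13, 5, 5),    # 5.2
        "data migration":     wsjf(3, 3, 13, 13),   # ~1.5
    }
    for name, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: {score:.1f}")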

I wish I could say that this process was easy.  The truth is that a great deal changes in three months – new opportunities and challenges, unexpected curveballs – so we’re constantly challenged to re-assess our priorities with each planning cycle.  On top of that there’s a lot we want to do, so we find ourselves often having hard discussions up until the planning day, especially around the “bubble items”.  It’s not clear to me that there’s a much easier way – we’re in a fast industry and a complex business – but we try to get better each quarter.

So the primary inputs to planning are a short, discrete, prioritized set of epic-sized initiatives for each team.  Most of these are functional but there are usually some architectural or operational topics as well.  That brings us now to the actual planning days (typically a Thursday/Friday):

  • On planning day 1, we start with a team breakfast at 0900 and then a kickoff presentation at 0930.  The kickoff presentation covers the big picture goals for the quarter and a quick review of each team’s focus and top items so everyone has context.  We also cover logistics – where they can find flip-charts and stickies, who’s in which rooms, etc.
  • Following the kickoff (and the kitchen cleanup), the teams go to their planning spaces and get started.  Basically, they start with the top priority item, plan it through to completion, and then repeat with the next item.  Once they get to the allocated capacity they stop planning.  The remaining items simply don’t get done.
Teams plan with flip charts for each sprint and colored stickies for tasks, milestones, etc.
  • "Full capacity" is an interesting and oft-debated question.  We have a loose agreement that teams should reserve ~20% for bugs and team discretion and another ~20% for refactoring and architecture work (see the sketch after this list).
  • As the teams are planning they're also working with other teams on inbound and outbound dependencies.  We've organized the teams to minimize dependencies but they're still a fact of life.  The teams negotiate how to support each other based on overall priorities and goals (ref. the "context" from the breakfast).  Any unresolved conflicts are escalated or raised at the review meeting (below).
  • At 4PM on the first day the scrum masters and other delivery managers get together to share their current plans with the group.  We use a web-based collaboration tool that allows each team to put virtual stickies on their assigned row with different colors illustrating milestones, spikes, tasks, releases, etc.  Dependencies are made visible by connecting two stickies with a line.  
Teams gather to review the day 1 draft plan.
  • Putting everything together allows us to visualize the major streams, see what made the cut and what didn’t, and address any dependency challenges or conflicts.  Generally there are several to-dos coming out of the review, primarily around working through dependencies or going to business stakeholders for clarification.
  • The morning of day 2 is primarily for making adjustments from the previous day, collaborating with other teams where combined efforts are needed and tying up loose ends.  Most teams wrap this up pretty early and then get back to their HIP sprint; others need most or all of the day.
  • At 4PM on day two we grab a beer and get back together in front of the stickies board to review any changes from the previous day and discuss any unresolved conflicts.  This exercise typically goes much faster than the day 1 review.  At the end we check confidence and then head home for a much needed break.
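And the capacity sketch promised above: with roughly 20% reserved for bugs and team discretion and another ~20% for refactoring and architecture, only about 60% of a team's raw capacity is plannable for roadmap items.  The point total below is a made-up example:

    # Loose agreement: ~20% reserved for bugs/team discretion, ~20% for
    # refactoring and architecture; the rest is plannable for roadmap items.
    def plannable_capacity(total_points, bug_reserve=0.20, refactor_reserve=0.20):
        reserved = total_points * (bug_reserve + refactor_reserve)
        return total_points - reserved

    print(plannable_capacity(50))  # hypothetical 50-point quarter -> 30.0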

Here’s the final plan from last quarter.  

Q2 final plan

It looks complex and it is complex.  Without developing our process, our teams and ourselves over the last couple of years we’d be hard pressed to effectively manage this complexity.

Following the planning we package up the plan and communicate a high-level, consumable version to the business and stakeholders.  We emphasize that these are our current targets and best estimates – this isn't a contract.  We'll do everything we can to stick to it but we may be surprised or, in good agile fashion, we may decide to make changes as the situation evolves.

So that brings us nearly full circle.  I started this series during our last planning days and expected it to be a quick post.  As I pulled the thread, however, I realized how much work had gone into our evolution in this area.  I could also see that a high-level flyover would leave huge gaps in the journey, so I decided to fly lower.

You can see by now that undertaking a journey like this takes a fair amount of time, experience and honest self-evaluation, regardless of the specific methodology you choose.  That said, the investment is worth it, and a great deal of value can be realized even early in the process.

In Bonial's case, we had a few advantages as we set off on the journey.  First, everyone was open to change, even when the change made them nervous.  The importance of this can't be overstated.  I've lost count of the organizations I've worked with in which the teams had no motivation to improve (though paradoxically most of them complained constantly about the status quo).  In the end the team has to want to improve, or at least be willing to give it a shot.  Which brings us to point two…

Second, we had good people and a healthy culture.  Where we lacked experience and skills, we more than compensated with a team of smart, energetic professionals.  With good people, you can generally solve any problem.

Last, but not least, we have a skilled, SAFe-trained Release Train Engineer to drive the process (though her role has evolved).  Even the finest orchestras in the world don't play on their own – they have a conductor.  In our case the conductor/RTE ensures:

  • The stage is set.  Everybody knows the timing, their roles and the rules of the game, and all the needed supplies are in place and easily accessible to everybody.
  • A short (really short!) list of candidates for planning is finalized before we start.  The RTE ensures we're observing Work in Progress (WIP) constraints, which are critical to maximizing throughput.  As she often says, "Let's stop starting things and start finishing things instead."
  • People know who to go to regarding priorities and impediments during planning.
  • The planning is properly wrapped up, all roadmaps and agreements are put together, and outcomes are properly communicated to all key stakeholders.
  • Solid retrospectives are done both on the quarter itself as well as the planning process so we can continue improving.

Whew!  That was a lot of writing for me and reading for you.  Kudos if you made it this far – I hope it was worth it.  So now you know how we do it – feel free to share your own stories about how you and your teams plan.  Best of luck in your own journey!

(Special thanks to Irina Zhovtobrukh (the mysterious RTE) for her contributions to this post as well as teaching us how to “conduct” better planning evolutions.)