Fear Factor: Guns vs Burgers

 

In my previous article I analyzed the statistics around terrorism vs gun deaths and found that, at least as of 2015,  Americans have a higher probability of dying at the hands of another American with a gun than a European has of being killed in a terror attack.  I also noted that the risk seemed to be inversely proportional to the fear.

Stepping back further, let’s look more broadly at how terrorism and gun deaths compare to other preventable causes of death:

At over 480,000 deaths per year, smoking dwarfs the deaths caused by either guns or terrorism in the US (even, it must be noted, when considering the ~3,000 deaths caused by the 9/11 attacks).  Obesity is rapidly overtaking smoking at 374,000 deaths per year and rising.  It seems that Americans should fear Big Macs and Marlboros far more than terrorists.  

As with guns vs terror, I can’t help but note that the fear factor seems to be almost exactly inverse to the risk – Americans seem to be terrified of terrorists, afraid of guns, and relatively indifferent to the rest.  Why so irrational?  I’m afraid that answering that question is likely outside the realm of data science and more in the realm of psychology or evolutionary biology.  It does call to mind, however, the argument made by Levitt and Dubner in Freakonomics when discussing the statistics around swimming pools vs guns.  If I recall correctly they use the term “dread” to describe the emotion that drives some irrational choices.  Perhaps the same thing is going on here – the idea of a truck slamming into a joyful Christmas market creates more dread than the somewhat abstract idea of dying from obesity or smoking.  

One final observation about guns in America – people are horrified when mass shootings happen but as a society they choose to do nothing to prevent the next massacre.  This is in marked contrast to terrorism in which the same people are willing to spend billions of dollars, close the borders and sacrifice civil liberties to prevent the next terrorist attack.  It seems to me the dread factor is high in both cases, but the response is highly asymmetrical.  Perhaps a topic for another day…

Fear Factor: Guns vs Terrorism

I’ve been pretty quiet here recently – some intense projects and my travel schedule haven’t left me much time to write.  I do have a few half-written posts that I’ll try to finish up soon.  In the meantime, here’s a short series that veers a bit from pure technology and into the interconnected realm of data analytics and social sciences…

A few weeks ago I was talking to a family member in the U.S. (I’m a U.S. citizen currently living in Germany) and we were discussing the recent spate of weather and other natural disasters that were hammering the states. When we were done he said, “Well as crazy as it is here I’d take this any day over what you’re dealing with.”

I was a bit confused, and asked what disaster he was referring to. He clarified, “No, I mean all of the terrorists driving trucks into crowds and setting off bombs on trains and stuff.”

Ah, right. I’ve heard similar statements several times since I moved to Europe and never quite understood them – after all, while horrific, the sheer number of terror-related deaths in either Europe or the U.S. is in the dozens or low hundreds; I was pretty confident that the probability of being a victim of a terrorist is far lower than that of many other forms of violent crime or preventable death. I replied, “You know, there are more gun deaths each day in the US than terrorism deaths in Europe every year. What you should be afraid of is walking out your door.”

Not surprisingly, we agreed to disagree and the conversation ended cordially. However, it got me thinking: Was I right that someone in Europe is less at risk from an Islamic (or other radical) terrorist than an American is from another American with a gun?  If not, why is the fear factor from terrorism so much greater than gun violence?

The first question sounded like a straightforward data analytics exercise, so I busted out a Jupyter notebook to explore, grabbed some data and challenged the hypothesis.

To analyze terrorism I chose the Global Terrorism Database (GTD), a very comprehensive collection of worldwide terrorism incidents over the last half century. Gun violence datasets were harder to come by, in part due to the successful lobbying efforts of the National Rifle Association (NRA) to block government research on gun violence, so I chose to work with the Centers for Disease Control (CDC) Multiple Causes of Death dataset, which classifies all deaths in the US, including deaths by firearms. The latest year in which the GTD and CDC datasets fully overlap is 2015, so that’s the year I chose to focus on.
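For anyone who wants to follow along, here’s roughly how the GTD slice can be pulled into pandas. Treat the filename and the column names (iyear, region_txt, nkill – as I recall them from the GTD codebook) as assumptions to verify against whatever export you download:

import pandas as pd

# Global Terrorism Database export (column names per the GTD codebook:
# iyear, region_txt, nkill -- check these against the actual file).
gtd = pd.read_csv("globalterrorismdb.csv", encoding="latin-1", low_memory=False)

# Focus on 2015, the last year fully covered by both datasets.
gtd_2015 = gtd[gtd["iyear"] == 2015]

# Fatalities per region in 2015.
deaths_by_region = gtd_2015.groupby("region_txt")["nkill"].sum().sort_values(ascending=False)
print(deaths_by_region)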

Terrorism 

Let’s start by looking at terrorism.  Worldwide, there was a significant spike in terrorism over the most recent decade, with the vast majority of the increase coming from the Middle East, Africa, and South Asia.

 

If we zoom into this decade and look only at the US and Western Europe, this is what we see:

Look at the Y axis on both of the above graphs – it’s clear that it’s much safer to be in Europe or the US than in many other parts of the world (two orders of magnitude safer). While Europe has seen a relative spike in terrorism-related deaths since the end of 2015, it also has roughly double the population of the US, so to get a better picture of how this compares to US deaths we need to look at deaths per million residents. Here’s what we get:

2015 terror deaths EU: 171.0 total, or 0.23 per million residents
So in 2015 a European had roughly a 1 in 4,000,000 chance of dying in a terrorist attack. That sounds pretty small.  Just out of curiosity, I wonder how that compares to terrorist attacks on American soil:

2015 terror deaths US: 44.0 total, or 0.14 per million residents
I hate to write this because some knucklehead will quote it out of context, but on the surface Europeans have roughly twice the probability of being terror victims as Americans when adjusted for population (in 2015 at least). But that’s like saying a person is twice as likely to be killed by a bear as by a shark – both numbers are so low that doubling either is still a low number.  (In fact, the odds of dying in a shark or bear attack aren’t too far off from the odds of dying at the hands of a terrorist, but that’s for another article.)
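For the curious, the per-million figures above are nothing more than simple division. Here’s a sketch using round 2015 population estimates (roughly 740 million for Europe and 321 million for the US – my own round numbers, not values from the datasets):

# Back-of-the-envelope rates per million residents. Population figures are
# rough 2015 estimates I'm assuming here, not taken from the article's data.
populations = {"Europe": 740e6, "US": 321e6}
terror_deaths_2015 = {"Europe": 171, "US": 44}

for region, deaths in terror_deaths_2015.items():
    per_million = deaths / populations[region] * 1e6
    print(f"{region}: {deaths} deaths, {per_million:.2f} per million")  # ~0.23 and ~0.14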
Let’s look at the other side of the problem.

Gun Deaths in the U.S. (round 1)

Ok, how does that compare to the risk of dying from a gun in the US? Here’s a high-level breakdown of US gun deaths in 2015:

suicide     22060
homicide    13018
accident      489
other         284

The rough numbers/ratios above have been quoted quite a bit over recent years – roughly 35K gun deaths per year with ~1/3 homicides and ~2/3 suicides – so no big surprises there. 
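The breakdown above is essentially one value_counts() away once the death records are loaded. A sketch of the idea – the filename and column names here (is_firearm_death, intent) are placeholders for whatever the CDC extract actually uses:

import pandas as pd

# Hypothetical CDC mortality extract: one row per death, with a flag for
# firearm deaths and an 'intent' field. Column names are stand-ins.
deaths = pd.read_csv("cdc_deaths_2015.csv")              # placeholder filename
gun_deaths = deaths[deaths["is_firearm_death"] == 1]     # hypothetical flag column
print(gun_deaths["intent"].value_counts())
# Expected shape of the result (from the figures above):
# suicide 22060, homicide 13018, accident 489, other 284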
 
Since terror attacks are essentially homicides, let’s look at gun homicides per million so we can compare with the terrorist threat:

2015 gun homicides US: 13018 total, or 40.29 per million residents
So, at ~40 gun homicides per million residents, an American is ~175x more likely to die from a gun homicide in the US than a European is from a terrorist in Europe.  Hmm.
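That ratio is nothing fancier than dividing the two rates quoted above:

# Both rates are the 2015 figures quoted in the text.
eu_terror_rate = 0.23          # terror deaths per million residents, Europe 2015
us_gun_homicide_rate = 40.29   # gun homicides per million residents, US 2015

print(f"Ratio: {us_gun_homicide_rate / eu_terror_rate:.0f}x")  # ~175x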
 
But… it could be argued that this isn’t a fair comparison.  I’ve heard several arguments that have gone something like this: “Terrorists tend to strike at random, killing innocent, unsuspecting victims.  U.S. gun violence mostly happens in places like Chicago, St Louis and Detroit and involves gangs and criminals.  In other words, U.S. gun violence is about ‘them’, and we’re not ‘them’.”
 
So how can we whittle the dataset down to “not them”?

Gun Deaths in the U.S. (round 2) 

Let’s see what we can find as we drill into the CDC data…

On an absolute basis, American men are ~6X more likely than women to be victims of gun violence, while on a percentage basis, men and women show a similar breakdown by intent, with suicide being the major contributor.

How about race?

The differences here are striking – blacks and Hispanics are far more likely to die from homicide while whites are overwhelmingly likely to take their own lives. To get a different perspective, let’s look at this on a percentage basis:

Again, some striking differences in intent between different racial groups.  (My gut tells me the homicide rate roughly correlates with average income level, but that’s an analysis for another day.)
 
Ok, maybe education plays a role, either directly or as a proxy for socio-economic status:

Again, a pretty strong correlation.

And now let’s look at age. Here are two views, one broken down by intent and the other by race:

(Note: the bump around 50 is due to a spike in white male suicide…  Remember, remember the month of Movember…)

While tragic, the suicides, accidents and undetermined cause events aren’t relevant to this analysis so we’ll exclude those to focus exclusively on homicides and revisit the age vs race graph in this light:

So, it appears that gun deaths skew heavily towards young black and Hispanic males without college degrees.  It feels wrong removing men from the equation since most of the comments I’ve heard relating to this hypothesis have come from men, so let’s just filter on the other dimensions and look at whites over 30 with college degrees:

2015 gun homicides US (white, over 30, college degree): 392 total, or 1.21 per million residents

So even this limited demographic is still ~5X more likely to die from a gun in the US than a European is from a terrorist attack.
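In pandas terms, this narrowing is just a chain of boolean filters. Again, the column names and codes below are hypothetical stand-ins for the real CDC fields, and the ~321 million population figure is my own round number:

import pandas as pd

# Continuing the hypothetical CDC extract from the earlier sketch; the column
# names (intent, race, age, education) are stand-ins, not the real field names.
deaths = pd.read_csv("cdc_deaths_2015.csv")              # placeholder filename
gun_deaths = deaths[deaths["is_firearm_death"] == 1]     # hypothetical flag column

subset = gun_deaths[
    (gun_deaths["intent"] == "homicide")
    & (gun_deaths["race"] == "White")
    & (gun_deaths["age"] > 30)
    & (gun_deaths["education"] == "college degree or higher")
]
print(len(subset))                          # the article reports 392 such deaths in 2015
print(round(len(subset) / 321e6 * 1e6, 2))  # ~1.2 per million, assuming ~321M US population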

Conclusion

Ok, let’s review:

  • In 2015 a person in Europe had a less than one-in-a-million chance of being killed by a terrorist.
  • That same year, a person in the U.S. had a probability of up to 40 in a million of being killed by another American with a gun.

At this point I think I can be pretty confident that my original hypothesis is correct: an American is at much higher risk of being killed by another American with a gun than a European is of being killed by a terrorist.

In the course of exploring this data I have to admit I was surprised at some of the things I found and want to explore them further – for example:

  • What’s going on with terrorism in the rest of the world?
  • How does the casualty rate from guns and terrorism compare with other preventable deaths?
  • Why is the fear factor almost inverse to the actual risks? 

Stay tuned.

(If interested, you can look at the code of the analysis on this Kaggle kernel.)

 

What I learned from writing an AI voice assistant and chat bot

I have a confession: despite being in management I still love to code.  Since I don’t get to program as much as I’d like or stay up on the latest trends and technologies, I set a goal for myself to learn at least one new technology every year (and more than one on a good year).  This learning hobby is how I made the leap from back-end to full-stack developer, how I learned iOS and Android, and how I stepped into the hallowed halls of Data Science.

This year I decided to explore chat bots and voice assistants.  As I learn best by doing, I generally think up a fun or useful project and then learn through building it.  For this project I decided to tackle an unending source of stress in my household: bickering and arguing over screen time for our kids.  

Enter ChronosBot

The idea behind ChronosBot is simple.  Parents set up screen time accounts for each child as well as an automatic allowance that puts time in the accounts.  After linking their account to Alexa, Google Assistant, Facebook Messenger, etc., they can say or write things like, “Alexa, ask ChronosBot to withdraw 30 minutes from Axel’s account” or “… what’s everyone’s balance?”

With the idea in place, I had to choose my tech stack.  Google has a robust platform built on API.AI.  API.AI supports a dozen or so chat integrations (Allo, Messenger, Telegram, Kik, etc.) as well as a voice interface for Google Home, allowing developers to (theoretically) write one interface for both voice and chat.  At the time I started, Amazon Alexa had a rudimentary platform for speech dialog development using structured text.  In both platforms the interface designer creates “intents” that match what the user says to something the bot can do and then provides appropriate responses, and both platforms hand off the business logic to a backend app using web hooks.

For the backend, I decided to sharpen my Python skills and implement in Django on top of Postgres.  For deployment I decided to give Heroku a try.  
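To give a feel for the webhook hand-off, here’s a stripped-down sketch of what a Django handler for a couple of ChronosBot-style intents might look like. The payload fields, intent names and the ScreenTimeAccount model are simplified placeholders of my own – the real Alexa and API.AI request and response formats are more involved:

# views.py -- a minimal webhook handler sketch. The payload fields below
# ("intent", "slots") and the ScreenTimeAccount model are hypothetical.
import json

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

from accounts.models import ScreenTimeAccount  # hypothetical Django model


@csrf_exempt
def chronosbot_webhook(request):
    payload = json.loads(request.body)
    intent = payload.get("intent")
    slots = payload.get("slots", {})

    if intent == "WithdrawTime":
        account = ScreenTimeAccount.objects.get(child_name=slots["child"])
        account.balance_minutes -= int(slots["minutes"])
        account.save()
        text = f"Withdrew {slots['minutes']} minutes from {slots['child']}'s account."
    elif intent == "GetBalances":
        balances = ScreenTimeAccount.objects.all()
        text = ", ".join(f"{a.child_name}: {a.balance_minutes} minutes" for a in balances)
    else:
        text = "Sorry, I didn't understand that."

    return JsonResponse({"speech": text})

In practice each platform wraps the intent and slot data in its own envelope, so a thin adapter per channel lets the core account logic stay shared across voice and chat.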

Development of the basic use cases took me a couple of weeks of late evenings and weekends.  I submitted to both Amazon and Google and waited for a week or so in each case for the review.  Both rejected my app, but for reasons that I hadn’t expected.  Amazon told me that my app violated the Alexa terms of service because it “targeted children” (huh?) and told me never to resubmit the app (it seems they have since relented).  Google gave me the boot because my invocation name couldn’t be recognized properly, but a very helpful person from Google worked with me to resolve the issue and now it’s live.  

I’ve since continued development and added new features like “rewards” and “penalties” (requested by my wife) and “mystery bonus” (requested by the kids).  I’ve enabled Telegram and Messenger and have adapted the platform to support both visual and audio surfaces.  And the Alexa version was finally approved earlier this week.

Lessons Learned

So, what have I learned while navigating the ins and outs of the Google and Alexa development platforms and publication processes? 

1)  Amazon and Google have very different approaches.  Google has taken the bold approach of enabling all community-developed actions and using an intent-matching algorithm to route users to the correct action.  Amazon requires users to enable specific skills via a Skill Store.  In both cases, discovery is a largely unsolved challenge.

2)  It’s too early to tell who will be king.  Amazon Alexa has a crazy head start, but Google seems to have a more robust speech development platform.   With a zillion Android devices already on the market one certainly can’t count Google out.  On the other hand, not a month seems to go by without a new Alexa form factor hitting the market.  

3)  It’s early days.  Both platforms are being developed at a lightning fast pace.  Google had a big head start with API.AI.  The original Alexa interface was frustratingly primitive, but they’ve since upgraded to a new UI (which suspiciously bears a strong resemblance to API.AI) that has great promise.  

I have to take my hat off to both companies for creating a paradigm and ecosystem that makes voice assistant and natural language development accessible to the broader development community.  It’s so straightforward that even my kids gave it a try – my daughter (10) developed “The Oracle”, which answers deeply profound questions like “Who’s awesome?” (she’s awesome).  My son (12) wrote a math quiz game with which he is happy to challenge anyone to beat his top score. 

4)  Conversational UX is easy; good conversational UX is really hard.  I’ve known this since I was involved with Nuance and the voice web in the late 1990s (and I also happen to be married to an expert in the space).  Making it easy to build a conversational UX is a very different thing from helping developers build a high-quality conversational UX (especially a voice UX).  Both Amazon and Google have tried to address this with volumes of best practice documentation, but I expect most developers will ignore it.

5)  Conversational UX is limited.  There are some use cases that work for serial interactions (voice or chat) and some that work better in parallel interactions (visual).  Trying to force one into the other typically doesn’t make sense or only applies to “desperate users”.  You see the effect of this to some degree already in the Alexa Skill Store – there are some clear clusters evolving (home automation, information retrieval, quiz games).

6)  Multi-modal UX is the next natural step.  I’m very excited about the Amazon Echo Show as I expect that will unleash a wave of interesting multi-modal interaction paradigms.  

7)  It’s fun.  There’s just something about the natural language element of voice assistants that allows for a richer, more human interaction than what GUIs can provide.  

All in all I’m really excited about the potential of this space, and I’m not alone – just look at the growth of the Alexa Skills Store.  The tech press is also taking a critical look at these capabilities (e.g. a recent article featuring yours truly) and I expect most companies are at least thinking about how these capabilities will play in their business.  My company, Bonial, is investing in several actions/skills to explore the potential of voice and chat interfaces.  To date we’ve already launched a bot that allows users to search for local deals and will shortly launch a voice assistant interface to our shopping list app, Out of Milk.  We’ve learned a lot and we’ll share more on those projects in other posts.  

How we Plan at Bonial (part 3)

Collaborative digital stickies board that we use for planning.

Ok, after all that, how do we actually plan at Bonial?

The heart of our planning activities is the Quarterly Planning, which is loosely modeled on Program Increment (PI) Planning from SAFe.  During quarterly planning / PI planning, everyone in the product development organization – developers, designers, architects, testers, product managers, operations specialists, etc. – gets together for a couple of days to map out their next phase.  We do our planning during the previous quarter’s HIP (Hardening, Innovation and Planning) sprint, which is sprint 6 of each quarter.

Before I dive into the actual planning days, I should point out that the preparations start several weeks earlier, when the product teams actively work with stakeholders, customer-facing teams and the executive team to validate the backlogs against the current company priorities and business realities.  The prep phase looks something like this:

  • The senior management team and product strategy board review the overall strategy and primary business goals to assess if any change in focus is needed.  
  • Next we make sure that product and delivery management has the same level of clarity. We get the delivery leads and product owners together and communicate the company goals for the upcoming quarter to them, taking the time to answer questions about strategy, challenges, current market trends etc. Our goal here is to make sure that all our leaders are able to bring clarity to their teams so that local decisions are made with the right context.
  • 3-4 weeks before the planning event, the product management team starts curating the backlogs for the different product and system streams.  They create a “long list” of major features and work items and meet with stakeholders, customers and Bonial management to validate priorities. 
  • A week before planning the “long lists” are reduced to “short lists” of the highest priority items. This is probably the hardest part of the process and it requires saying “no” to things… we find that our stakeholders and customers all agree that discipline is needed so long as it mostly impacts other stakeholders and customers.  Over the years we’ve tried various formal mechanisms for prioritization – Weighted Shortest Job First, Feature Bucks, etc. – but in the end we find that different tools are needed for different situations and that, with experience, people often intuitively know the order.
  • Over the next week the product team spends time working through open questions and details while architects and engineers do the same on the technical side.  There are also generally some intense discussions about “bubble” items – features that are right on the cusp of making the list – as well as hot items that didn’t make the list.

I wish I could say that this process was easy.  The truth is that a great deal changes in three months – new opportunities and challenges, unexpected curveballs – so we’re constantly challenged to re-assess our priorities with each planning cycle.  On top of that there’s a lot we want to do, so we find ourselves often having hard discussions up until the planning day, especially around the “bubble items”.  It’s not clear to me that there’s a much easier way – we’re in a fast industry and a complex business – but we try to get better each quarter.

So the primary inputs to planning are a short, discrete, prioritized set of epic-sized initiatives for each team.  Most of these are functional but there are usually some architectural or operational topics as well.  That brings us now to the actual planning days (typically a Thursday/Friday):

  • On planning day 1, we start with a team breakfast at 0900 and then a kickoff presentation at 0930.  The kickoff presentation covers the big picture goals for the quarter and a quick review of each team’s focus and top items so everyone has context.  We also cover logistics – where they can find flip-charts and stickies, who’s in which rooms, etc.
  • Following the kickoff (and the kitchen cleanup), the teams go to their planning spaces and get started.  Basically, they start with the top priority item, plan it through to completion, and then repeat with the next item.  Once they get to the allocated capacity they stop planning.  The remaining items simply don’t get done.
Teams plan with flip charts for each sprint and colored stickies for tasks, milestones, etc.
  • “Full capacity” is an interesting and oft debated question.  We have a loose agreement that teams should reserve ~20% for bugs and team discretion and should reserve another ~20% for refactoring and architecture work. 
  • As the teams are planning they’re also working with other teams on inbound and outbound dependencies.  We’ve organized the teams to minimize dependencies but they’re still a fact of life.  The teams negotiate how to support each other based on overall priorities and goals (ref. the “context” from the breakfast).  Any unresolved conflicts are escalated or raised at the review meeting (below).
  • At 4PM on the first day the scrum masters and other delivery managers get together to share their current plans with the group.  We use a web-based collaboration tool that allows each team to put virtual stickies on their assigned row with different colors illustrating milestones, spikes, tasks, releases, etc.  Dependencies are made visible by connecting two stickies with a line.  
Teams gather to review the day 1 draft plan.
  • Putting everything together allows us to visualize the major streams, see what made the cut and what didn’t, and address any dependency challenges or conflicts.  Generally there are several to-dos coming out of the review, primarily around working through dependencies or going to business stakeholders for clarification.
  • The morning of day 2 is primarily for making adjustments from the previous day, collaborating with other teams where combined efforts are needed and tying up loose ends.  Most teams wrap this up pretty early and then get back to their HIP sprint; others need most or all of the day.  
  • At 4PM on day two we grab a beer and get back together in front of the stickies board to review any changes from the previous day and discuss any unresolved conflicts.  This exercise typically goes much faster than the day 1 review.  At the end we check confidence and then head home for a much needed break.

Here’s the final plan from last quarter.  

Q2 final plan

It looks complex and it is complex.  Without developing our process, our teams and ourselves over the last couple of years we’d be hard pressed to effectively manage this complexity.

Following the planning we package up the plan and communicate a high-level, consumable version to the business and stakeholders.  We emphasize that these are our current targets and best estimates – this isn’t a contract.  We’ll do everything we can to stick to it but we may be surprised or, in good agile fashion, we may decide to make changes as the situation evolves.

So that brings us nearly full circle.  I started this series during our last planning days and expected it to be a quick post.  As I pulled the thread, however, I realized how much work had gone into our evolution in this area.  I could also see that a high-level flyover would leave huge gaps in the journey, so I decided to fly lower.   

You can see by now that undertaking a journey like this takes a fair amount of time, experience and honest self-evaluation, regardless of the specific methodology you choose.  That said, the investment is worth it, and a great deal of value can be realized even early in the process.

In Bonial’s case, we had a few advantages as we set off on the journey.  First, everyone was open to change, even when the change made them nervous.  The importance of this can’t be overstated.  I’ve lost count of the organizations I’ve worked with in which the teams had no motivation to improve (though paradoxically most of them complained constantly about the status quo).  In the end the team has to want to change, or at least be willing to give it a shot.  Which brings us to point two…

Second, we had good people and a healthy culture.  What we lacked in experience and skills, we more than compensated for with a team of smart, energetic professionals.  With good people, you can generally solve any problem. 

Last, but not least, we have a skilled, SAFe-trained Release Train Engineer (RTE) to drive the process (though her role has evolved).  Even the finest orchestras in the world don’t play on their own – they have a conductor.  In our case the conductor/RTE ensures:

  • The stage is set. Everybody knows the timing, their roles and the rules of the game, and all the needed supplies are in place and easily accessible to everyone.
  • A short (really short!) list of candidates for planning is finalized before we start.  The RTE ensures we’re observing Work in Progress (WIP) constraints, which are critical to maximizing throughput.  As she often says, “Let’s stop starting things and start finishing things instead.”  
  • People know who to go to regarding priorities and impediments during planning.
  • The planning is properly wrapped up, all roadmaps and agreements are put together, and outcomes are properly communicated to all key stakeholders.
  • Solid retrospectives are done both on the quarter itself as well as the planning process so we can continue improving.

Whew!  That was a lot of writing for me and reading for you.  Kudos if you made it this far – I hope it was worth it.  So now you know how we do it – feel free to share your own stories about how you and your teams plan.  Best of luck in your own journey!

(Special thanks to Irina Zhovtobrukh (the mysterious RTE) for her contributions to this post as well as teaching us how to “conduct” better planning evolutions.)

How we Plan at Bonial (part 2: competence)

In the previous two posts I talked about the importance of clarity and control, but even perfect clarity and unlimited control will likely still lead to failure and frustration if the team isn’t ready to take on these new responsibilities. That’s where Competence comes in.

To build competence across the team we invested in experienced practitioners as well as training and mentoring. We hired a talented SAFe-trained development manager (“Release Train Engineer” in SAFe parlance) to both lead our transformation as well as provide training and mentoring.  We brought in agile and SAFe trainers for multi-day training sessions on team and enterprise agile (more on SAFe in later posts).  We started leadership and management training for our product owners, new team leads and lead developers. The more experienced members of the team actively coached others in best practices.

Why go through all this trouble?  Simple – a common source of failure I’ve seen over the years is this: the fantasy that calling something ‘agile’ somehow makes it agile.  Too often I’ve seen organizations slap on the label of “scrum teams,” appoint a newly hired Scrum Master or Agile Coach, tell them to have stand-ups and sprints, and then hope that “agile happens”… a.k.a. “fake it until you make it”.  Good luck.  Like it or not, you have to invest in training, excellent people and experienced leadership.

A word of advice: don’t skimp on the training. Our first training session involved a half-day session for only key leaders. As we quickly learned, that’s not training – that’s just a teaser.  Frankly I was part of the problem – I needed to shift my attitude and accept that, unless the whole team was on board and up to speed, we’d never be able to run at full speed.  Yes, it was expensive in both time and money, but necessary.  We’ve since opened up both the breadth and depth of the training.

We also learned by doing. We built on a strong culture of open and honest retrospectives and we actively shared the learnings between teams. We experimented with new techniques and, when they worked, spread them throughout the organization. We actively cultivated an environment of “low fear” so that people had space to learn and grow.

As a management team, we also worked hard to “specify goals, not methods” as part of the shift away from the Roadmap Committee described in the previous post. Why is this a competence topic? Because by forcing ourselves to stay out of the details we provided space for the teams to learn and grow. This also opened up room for lots of great ideas that may never have been voiced in a top-down approach.

Key takeaway: invest in training and regular, iterative experiential learning. Put your teams in positions where they need to stretch their knowledge and experience so that they have the context and confidence going forward to execute the mission (but actively support them as they learn).  And, as always, hire and retain great people.

One thing before we get back to the original topic – as I re-read these last three posts I can see how a reader might get the sense that we executed smoothly via a carefully orchestrated plan.  Not so.  There was trial-and-error, plenty of course adjustments and a mix of successes and failures.  That’s ok – it takes time.  What’s important is keeping your eye on the ultimate goal, being realistic and working together as a team to make it happen.

Ok, after a long detour through the background, back to the original topic…

How we Plan at Bonial (part 2: control)

Blue Angels – extreme control

As you read in the previous post, we shed some light on what we were (and weren’t) doing with some simple Clarity mechanisms with regard to planning our software development.  Now we needed to make sure everyone knew who should be doing what – a.k.a. Control. 

We started with a new roadmap governance process.  We knew that if we wanted to scale the organization we had to fundamentally rationalize the “roadmap committee”.  To that end we developed the following decision flow chart:

Bonial’s first update to roadmap governance

Though it appears complex, it’s built around a single principle: push as many decisions to the teams as possible.  The “roadmap committee” would be responsible for major strategy and funding decisions and for monitoring progress; the teams would execute under the broad guidance from the committee.  

This shift to distributed control was fundamental to our later growth and success but the truth is that it took the better part of a year until we “got it right-ish”.  It was an iterative process of building trust on all sides – management had to trust the teams to make good decisions, the teams had to trust management to provide clear guidance and hold to it, and the stakeholders had to trust both.  But it was worth it.  

Most importantly, the teams began to “own” their mission which changed everything. 

The Roadmap Committee has long since been replaced with other more focused and lighter-weight mechanisms, but the principles still hold true – executive management sets the goals, allocates resources and provides experience and mentoring; the teams decide how to achieve the goals and execute.  We continue to explore different organizations and alignments to optimize our software development and delivery, and we assume we’ll continue to experiment as we grow and our mission changes.

Another major step we took that impacted both control and clarity was to align our teams into Value Streams.  In our effort to improve how we applied Lean and Agile principles at the team and group levels, we decided to adopt best practices from the Scaled Agile Framework (SAFe) for software development at the enterprise level.  SAFe teams are built around “Programs” or “Value Streams” that allow teams to focus on a specific portion of the mission and operate as independently as possible.  We deviated quite a bit from pure SAFe and formed three streams around our user-facing efforts, our business systems and our operations initiatives.  Nevertheless, the benefits were immediate as we reduced “prioritization hell”, which is what I call the often fruitless act of trying to compare a revenue-generating topic with, for example, a cost savings or security topic. 

Key takeaway: it’s impossible to both scale and maintain central control.  Effective scaling requires creating semi-autonomous, fully-capable teams organized to be relatively independent and provided with the clarity needed to tackle their mission.  This can be a tough step, especially in organizations with a long history of central control, but it’s a step that must be taken.  (FWIW I’ve seen the opposite and it’s not pretty.)

So now we knew what we were doing and who should be doing what.  We were getting a lot closer, but we had one more big step…

How we Plan at Bonial (part 2: clarity)

Clarity

How do you go about fixing something that requires you to change almost everything you do?  As described in part 1, this was the situation we faced at Bonial in late 2014 when it came to the governance and execution of our product development roadmap.  

Rather than re-inventing the wheel, we took advantage of proven playbooks – one for organizational change and one for enterprise agile.  On the organizational side, we knew that the “top down, centralized control” model was already strained and would not scale.  So we leveraged elements from the (fantastic) book “Turn the Ship Around!” by Capt. David Marquet, which describes one organization’s journey from a top-down leadership structure to a “leader-leader” structure with distributed ownership and control.  Bonial would have to undergo a similar transformation – we needed everyone to be engaged and feeling ownership if we were to realize rapid transformation and scale. 

In the book, the author presents a couple dozen excellent leadership mechanisms and groups them under three high-level categories – Clarity, Control, and Competence.  In the interest of brevity, I’ll describe just a few of the things we did to improve in these categories.  (I’ll also break them up over several posts.)

Starting with clarity, we began with the simplest exercise possible: we documented all of the work-in-progress on one list.  Absurdly basic yet profound.  We created the first draft by literally going from team to team, asking them what projects were in progress, and putting them in a Google Sheet.  (Why this format?  Because normalizing and adapting the existing tracking tools (Jira, Trello) would have taken far too long and wasted the team’s time and energy.  Also, Google Sheets allows for simultaneous editing, which is critical for collaboration.)  To make this relevant for business stakeholders, we then dropped the small “story” and “task” level items and broke down the “saga” level items so that the resulting list was at a meaningful “epic” or “project” level.

Here’s a snap of an archived copy of the first version:

Screenshot of first Bonial Roadmap on Google Sheets

This exercise had several immediate impacts.  First, it showed our stakeholders that the engineering team was actually working on quite a few projects and began to restore some confidence in the product development function.  Second, it shed light on all the projects and prompted a number of valid and constructive questions as to the priorities and business justifications for the projects.  This in turn led directly to our decision to do more formal and intentional planning: we wanted to ensure that our engineering resources were “doing the right things,” not just “doing things right.”

Over time this simple Google Sheet has grown to be the primary tool for viewing and communicating the current quarter’s roadmap.  We populate the sheet with the output of each quarter’s planning exercise (more on that to follow).  Twice a week we review the status of all items (red, yellow, green) and discuss as a team what we can do to adjust if needed.  The same spreadsheet is publicly available to all stakeholders for full transparency.  We’ve considered several times moving to more sophisticated (and expensive) tools, but each time we decided that Google Sheets does everything we need.

Key takeaway: it’s hard to plan if you don’t know what you’re already doing.  Take the time to get clarity on what’s happening, tune it to the right granularity, and ensure there’s full transparency.

In the next post I’ll talk about how we approach mechanisms for control. 

How We Plan at Bonial (Part 1: the early days)

Today is the first day of our quarterly planning ritual here at Bonial.  As I write this the teams are huddled away passionately discussing, digesting, challenging, and estimating their candidate work items.  We have over a hundred people from 25 different countries and multiple offices working through dozens of epics.  By tomorrow we’ll have a solid plan agreed upon by the engineers, designers, testers, data scientists, operations specialists and product managers as well as their stakeholders.  

It wasn’t always like this.

When I arrived at Bonial a couple of years ago, there was no documented roadmap or cohesive prioritization process.  The planning horizon ranged from intra-day for emergencies to a couple of weeks for most other items.  No-one had a clear understanding of what we were working on and why.  The stakeholders didn’t trust engineering and everyone was unhappy.

Getting from there to here hasn’t been easy.  Over the next few posts I’ll walk you through how we got to where we are today.  

But to understand the journey we have to start at the beginning-ish…

In 2014, Bonial was a mature startup with seven or eight years under its belt.  We had a very successful mobile and web app being used in a dozen or so countries.  The product development crew was organized into four scrum teams, an ops team and a design team and was responsible for developing all of the user facing systems as well as the critical business systems.  All-in-all, there were 40-50 people working together in product development.

Unfortunately the team was less effective than it could and should have been, in large part due to a lack of clarity and governance.  For starters, not only was there a lack of a coherent roadmap, there wasn’t even any clear record of what work was currently being executed.  We had tickets in Jira scattered across a dozen or more “projects,” Trello boards, stickies on blackboards, and a whole lot of ideas in people’s heads, but there was no one place a stakeholder could go and get a simple answer to the question: “what is the status of my project?”

What roadmap planning was done happened in a bi-weekly session called the “roadmap committee.”  This was a group of senior managers from the extended product development organization and stakeholders who reviewed development progress and made decisions on new initiatives.  I’m being nice when I say that it wasn’t much fun.  The selection of initiatives being governed was somewhat arbitrary and the value provided by the committee was questionable.  We often hashed over the same questions over and over again.  Unfortunately it was the only vehicle in place to provide some level of two-way communications regarding roadmap and status.

The end result was that no-one was happy.  The stakeholders and customers felt like their needs were ignored and that, when their projects were accepted, delivery was too slow.  The engineers felt like they were in a blender of arbitrary and incoherent requirements over which they felt no sense of ownership.  And the product management team was stuck in the middle, working to adjust to the latest change and managing unhappy stakeholders and engineers both.  The net effect was perceived and real low performance and a sense that we were set up to fail.

So we decided to change this; the solution would require a great deal of work in many areas across the people/process/technology spectrums.  It all came together, though, in planning.  Stay tuned for part 2. 

The Micro-service Conundrum

 

Micro-services have been the rage in software circles over the past couple of years.  A natural evolution of service-oriented architectures (SOA), and popularized by successful implementations at companies like Spotify, SoundCloud and many others, micro-services have become the “must-have gadget this holiday season”: if you aren’t doing them, you must be doing something wrong.  

But is that true?  As much as people (and especially engineers) love black and white, the answer here is a firm “maybe.”  Here are some of the positives and negatives from one CTO’s perspective.

On the plus side, micro-service architectures provide an excellent canvas for rapid development and continuous integration.  Hard dependencies are minimized, business logic is localized, and the resulting services are typically cloud-ready.  Developers tend to like micro-services because the architecture allows for a great deal of independence.  It’s hard to overstate the potential pain savings and optimizations – people, process and technology – that can be driven by moving to this type of architecture.

But it doesn’t come for free.  For starters, you’ll likely have a lot more moving pieces in terms of individual components and running executables.  A few weeks ago I wrote a post on the architectural heuristic: Simplify Simplify Simplify in which I posited that simple is better when it comes to minimizing TCO.  In that vein, one must ask if micro-services follow the rule.  Yes, each individual service itself is simpler than a bloated monolith as a result of the small size and tight boundaries.  But the total business logic in your enterprise hasn’t changed, and now you may have hundreds or thousands of additional code modules to manage and executables to orchestrate.  The good news is that cloud hosting providers like AWS provide an ever increasing set of tools to help with managing micro-service architectures (e.g. Lambda, Container Services), but it still requires a good deal of cultural and process change.

Another side effect of the proliferation of executables is a potential increase in cost – many hosting providers and software vendors (e.g. APM providers) still price based on the number of processes or agents.  If you take the same processing load and 10X the number of running processes, you might find yourself in a world of hurt pretty quickly.

Finally, in moving to micro-services, you’ll find yourself needing to address a host of new challenges that you may not have had to previously – service discovery, versioning, transactions and eventual consistency, event tracing, security, etc.  At a minimum, the upside benefits you’ll realize will be offset by developing competency and code to solve those new challenges.

So, what does this mean for the typical company?  If you have applications that are bloated monoliths, those are fantastic candidates for breaking down into smaller components or micro-services.  On the other hand, if you have a reasonably well architected system with decent boundaries in place already, I’d carefully weigh the costs and benefits – maybe run a few trial projects to get a better sense of how it would fit into your platform.  Just realize that in many ways you’re “squeezing the balloon” – trading one set of problems for another.  So long as you’re happier with the new problems (and the corresponding benefits), you win.

In closing, whether you move to micro-services or not, I do think there are great lessons to be learned from applying the discipline required by micro-services – namely, enforcing clear boundaries around business logic and using “API thinking” to service a variety of clients.  I wonder if there isn’t a compromise to be had in which one uses the principles for developing and organizing the code, but you still deploy in a more constrained manner – “Code Micro, Deploy Macro.”  But that’s a discussion for another time. 

Getting Extreme

 

In my previous post on Extreme Ownership I shared that I wished more technology companies would take the principles more seriously.  Over the last month my wish was granted right here at my company.  

Our executive team had an offsite strategy meeting last week, and one of the coolest things we did was take a deep dive into Extreme Ownership.  In the weeks leading up to the summit each member of the team – managing directors, senior execs and CxOs – read Extreme Ownership and prepared homework consisting of an introspective look into how they’d individually violated or been challenged by the principles as well as which principles we wanted to focus on bringing more into the company.

We discussed our experiences over dinner in a very candid fashion.  Each person shared one or two “fails” that tracked back to the principles, or challenges that could have been solved by better applying the principles.  It’s not often that very skilled and accomplished senior executives are willing to admit to failures in front of their peers, so I think that says a lot about the character of those around the table as well as their commitment to Extreme Ownership.

Some of the maxims that resonated strongly and were repeatedly mentioned:

  • “There are no bad boats, only bad leaders” – the core idea here is that you have to look first at the immediate leader before blaming the team itself for underperformance.
  • “It’s not what you preach, it’s what you tolerate” – how true.  How brutally true.
  • “Check the ego” – as the authors note, “egos cloud and disrupt everything.”  If you don’t have the discipline to keep your ego in check you don’t deserve the trust and confidence of the people you lead.
  • “They don’t want me to fail” – how many times do we assume that a boss or outside organization is purposely trying to make our lives harder when they put an obstacle in our way?  Probably quite a bit.  And how often is that true?  Likely very seldom.  If we’d drop the assumption of hostile intent and the resulting “us vs them” attitude, business and life would be a lot easier.

One of the longer and more challenging discussions was around how to move to “Decentralized Command” – let’s face it, it’s not easy to step back and let others take charge of executing a mission that you’re accountable for.  But it must be done to scale the organization and to develop the next generation of leaders.  And guess what – sometimes they will fail, and you’ll still own the result.  Our COO made a key point here – while failure in the SEALs often results in injury or death, a business failure will have much, much lighter consequences, so we need to take an objective look at the real risk and balance it against the cost of not decentralizing. 

As a team we decided on three of the principles we’d like to focus on for the entire organization, and each of us was assigned a buddy from within the group to challenge us to grow in these areas.  

I was really energized by this process and I’d recommend it to any team that wants to move in this direction.  In hindsight, I recognize that our company already has a pretty solid culture of accountability and a general lack of fear, which probably made this a lot easier; some teams will have to overcome much bigger culture and ego challenges.  Which, in the end, means it’s even more vital.