Accelerated Velocity: Creating an Architectural Runway

Note: this article is part 8 of a series called Accelerated Velocity.  This part can be read stand-alone, but I recommend that you read the earlier parts so as to have the overall context.

Most startups are, by necessity and by design, minimalistic when it comes to feature development.  They build their delivery stack (web site or API) and a few tools needed to manage delivery (control panel, CMS), then race to market and scramble to meet customer requests.  Long-term architecture thinking is often reduced to a few hasty sketches, and technical debt mitigation is a luxury buried deep in the “someday” queue.

At some point success catches up and the tech debt becomes really painful.  Engineers spend crazy amounts of time responding to production issues – time they could have used to develop new capabilities.  New features take longer and longer to implement.  The system collapses under new load.  At this point tweaks won’t save the day.  An enterprise architecture strategy and runway are needed.

What is an architecture runway?  In short, it’s a foundational set of capabilities, aligned to the big-picture architecture strategy, that enables rapid development of new features.  (SAFe describes it well here.)  In plain English – it’s investing in foundational capabilities so features come faster.

The anchor of the architecture runway is, of course, the architecture itself.   I’m not going to wade into the dogmatic debate about “what is software architecture”; rather, I’ll simply state that a good architecture creates and maintains order and adaptability within a complex system.  The architecture itself should be guided by a strategy and long-term view on how the enterprise architecture will evolve to meet the needs of the business in a changing market and tech-space.   

In developing an architecture strategy and runway, architects should start with the current state.  At the very least, create a simple diagram that gives everyone on the team context as to what pieces and parts are in the system and how they play together.  Once the “as is” architecture is identified and documented, the architects can roll up their sleeves and develop the “to be” picture, identify the gaps between the two states, and then develop a strategy for moving towards the “to be”.  The strategy can be divided into discrete epics / projects, and construction of the runway can begin.
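For readers who like things concrete, here’s a minimal sketch of how that gap analysis can be captured as structured data so that each gap becomes a candidate epic on the runway backlog.  This is purely illustrative – the domain names and maturity states below are made up, not our actual analysis:

```java
import java.util.List;

// Illustrative only: modeling the "as is" vs. "to be" gap analysis as data.
// Domain names and maturity states are hypothetical examples.
public class GapAnalysis {

    enum Maturity { MISSING, AD_HOC, PARTIAL, MATURE }

    record Domain(String name, Maturity asIs, Maturity toBe) {
        boolean hasGap() { return asIs.compareTo(toBe) < 0; }
    }

    public static void main(String[] args) {
        List<Domain> domains = List.of(
            new Domain("Content Publishing", Maturity.AD_HOC, Maturity.MATURE),
            new Domain("Tracking & Data Pipeline", Maturity.PARTIAL, Maturity.MATURE),
            new Domain("Monitoring", Maturity.MISSING, Maturity.PARTIAL));

        // Each gap becomes a candidate epic for runway construction.
        domains.stream()
               .filter(Domain::hasGap)
               .forEach(d -> System.out.printf("Epic candidate: %s (%s -> %s)%n",
                       d.name(), d.asIs(), d.toBe()));
    }
}
```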

Bonial’s Architecture Runway

Success had caught up to Bonial in 2014.  Given the alternative I think everyone would agree that that’s the right problem to have, but it was a problem nonetheless.  The majority of the software was packaged into a single, huge executable called “Portal3,” which contained all of the business logic for the web sites, mobile APIs, content publishing system and a couple dozen batch jobs.  There were a few ancillary systems for online marketing and some assorted scripts, but they were largely “rogue” projects which didn’t add to the overall enterprise coherence.  While this satisfied the immediate needs and had fueled impressive early growth and business success, it wasn’t ready for the next phase.

One of my first hires at Bonial was Al Villegas, an experienced technologist who I asked to focus on enterprise architecture.  He was a great fit as he had the right mix of broad systems perspective and a roll-up-his-sleeves / lead-from-the-front mentality.  He and I collaborated on big-picture “as-is” and “to-be” diagrams that highlighted the full spectrum of enterprise domains and showed clearly where we needed to invest going forward.  Fortunately we versioned and saved the diagrams, so here are the originals:

Original 2014 “As Is” High Level Enterprise Architecture
Original 2015 “To Be” High Level Enterprise Architecture

These pictures served several purposes: (1) they gave us an anchor point for defining and prioritizing long-term platform initiatives, (2) they let us identify the domains that were misaligned, underserved or needed the most work, and (3) they gave every engineer additional context as they developed their solutions on a day-to-day basis.

Then the hard work started.  We would have loved to do everything at once, but given the realities of resource constraints and business imperatives we had to prioritize which runways to develop first.  As described in other articles of this series, we focussed early on our monitoring frameworks and on breaking up the monolith.  In parallel we also started a multi-phase, long-term initiative to overhaul our tracking architecture and data pipelines.  Later we moved our software and data platforms to AWS in phases and adopted relevant AWS IaaS and SaaS capabilities, often modifying or greatly simplifying elements of the architecture in the process.  Across this period, we continually refined and improved our APIs, moving from the dedicated/custom approach previously used to a REST-based, event-driven micro-services model.  We also invested in an SDLC runway, building tools on top of the already mature devops capabilities to further accelerate the development process.
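To give a feel for the REST-plus-events pattern we moved toward, here’s a minimal sketch in Spring Boot.  To be clear, this is not our actual code – the Offer resource, the event type and the endpoint are hypothetical stand-ins:

```java
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

// Hypothetical resource and domain event, for illustration only.
record Offer(String id, String title) {}
record OfferCreatedEvent(String offerId) {}

@RestController
@RequestMapping("/api/v1/offers")
class OfferController {

    private final ApplicationEventPublisher events;

    OfferController(ApplicationEventPublisher events) {
        this.events = events;
    }

    @PostMapping
    ResponseEntity<Offer> create(@RequestBody Offer offer) {
        // ...persistence omitted in this sketch...
        // Downstream consumers (search index, notifications, analytics) react
        // to the event instead of being called synchronously – no hard dependency.
        events.publishEvent(new OfferCreatedEvent(offer.id()));
        return ResponseEntity.status(HttpStatus.CREATED).body(offer);
    }
}
```

The point is the shape: a narrow REST surface in front, an event published behind it, and no direct calls into other teams’ services.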

The end result is a massive acceleration effect.  For example, we recently implemented a first release of a complex new feature involving sophisticated machine-learning personalization algorithms, new APIs and major UI changes across iOS, Android and web.  The implementation phase was knocked out in a couple of sprints.  How?  In part because the cross-functional team had available a rich toolbox of capabilities that had been laid down as part of the architecture runway: REST APIs, a flexible new content publishing system, a massive data-lake with realtime streaming, a powerful SDLC / staging system that made spinning up new production systems easy, etc.  The absence of any of these capabilities would have added immensely to the timeline.

The architecture continues to evolve.  We’ve recently added realtime machine learning and AI capabilities as well as integrations with a number of external partners, both of which have extended the architecture and brought new capabilities and new (and welcome) challenges.  We are continually updating the “as is” picture, adapting the architecture strategy to match the needs of the business, and investing in new runway.

And the cycle continues.

Closing Thoughts

  • Companies should start with a simple, single solution – that’s fine; it’s important to live to fight another day.  But eventually you’ll need a defined architecture and runway.
  • Start with a “big picture” to give everyone context and drill down from there.
  • Don’t forget the business systems: sales force automation, order management, CRM, billing, etc.  As much as everyone likes to focus on product delivery, it’s the enterprise systems that run the business.
  • Create a long-term architectural vision to help guide the big, long-term investments.

Accelerated Velocity: Enabling Independent Action

Note: this article is part 6 of a series called Accelerated Velocity.  This part can be read stand-alone, but I recommend that you read the earlier parts so as to have the overall context.

Inefficiency drives me crazy.  It’s like fingernails on a chalkboard.  When I’m the victim of an inefficient process, I can’t help but stew on the opportunity costs and become increasingly annoyed.  This sadly means I’m quite often annoyed, since inefficiency seems to be the natural rest state for most processes.

There are lots of reasons why inefficiency is the norm, but in general they fall into one of the following categories:

1) Poor process design

2) Poor process execution

3) Entropy and chance

4) External dependencies

The good news in software development is that Lean/agile best practices and reference implementations cover process design (#1).  Process execution (#2) can likewise be helped by hiring great people and following agile best practices.  Entropy (#3) can’t, by definition, be eliminated, but its effects can be mitigated by addressing the others effectively.

Which leaves us with the bane of efficient processes and operations: dependencies (#4). 

Simply put, a dependency is anything that needs to happen outside of the process/project in question in order for the process/project to proceed or complete.  For example, a software project team may require an API from another team before it can finish its feature.  Likewise a release may require certification by an external QA team before going to production.  In both cases, the external dependency is the point where the process will likely get stuck or become a bottleneck, often with ripple effects extending throughout the system.  The more dependencies, the more chances for disruption and delay.

So how does one reduce the impact of dependencies?

The simplest way is to remove the dependencies altogether.  Start by forming teams that are self-contained, aligned behind the same mission, and ideally report to the same overall boss.  Take, for example, the age-old divisions between product, development, QA, and operations.  If these four groups report to different managers with different agendas, then the only reasonable outcome will be pain.  So make it go away!  Put them all on the same team. Get them focussed on the same goals.  Give them all a stake in the overall success.
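When a cross-team API dependency can’t be removed outright, one common softening tactic is to agree on the contract up front and let the consuming team code against a temporary stub until the real service ships.  A sketch, with hypothetical names:

```java
// The contract both teams agree on up front (hypothetical example).
interface UserProfileApi {
    String displayName(String userId);
}

// A temporary stub the consuming team develops and tests against.
// The dependency still exists, but it no longer blocks day-to-day work.
class StubUserProfileApi implements UserProfileApi {
    @Override
    public String displayName(String userId) {
        return "user-" + userId; // canned response until the real API arrives
    }
}
```

When the real implementation ships, swapping out the stub is a one-line change at the wiring point.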

Second, distribute decision making and control.  Any central governance committee will be a chokepoint, and should only exist when (a) having a chokepoint is the goal, or (b) when the stakes are so high that there are literally no other options.  Otherwise push decision-making into the teams so that there is no wait time for decisions.  Senior management should provide overall strategic guidance and the teams should make tactical decisions.  (SAFe describes it well here.)

In 2014, Bonial carried a heavy burden of technical and organizational dependencies and the result was near gridlock.

At the time, engineering was divided into five teams (four development teams and one ops team), and each team had integrated QA and supporting ops.  So far, so good.  Unfortunately, the chokepoints in governance and the technical restrictions imposed by a shared, monolithic code-base effectively minimized independent action for most of the teams, resulting in one, large, inter-connected mega-team.

There was a mechanism known as “the roadmap committee” which was nominally responsible for product governance, but in practice it had little to do with the roadmap and more to do with selective project oversight.  One of the committee’s policies held that nothing larger than a couple of days of work was allowed to proceed without its blessing, so even relatively minor items languished in queues waiting for upcoming committee meetings.

What little did make it through the committee ran directly into the buzzsaw of the monolith.  Nearly all Bonial software logic was embedded in a single large executable called “Portal3”.  Every change to the monolith had to be coordinated with every other team to ensure no breakage.  Every release required a full regression test of every enterprise system, even when the changes were small and isolated to a single component.  This resulted in a 3-4 day “release war-room” every two weeks that tied down both ops and the team unfortunate enough to be on duty.

It was painful.  It was slow.  Everyone hated it.

We started where we had to – on the monolith.  Efforts had been underway for a year or more to gradually move functionality off of the beast, but it became increasingly clear with each passing quarter that the “slow and steady” approach was not going to bear fruit in a timeframe relevant to mere mortals. So our lead architect, Al, and I decided on a brute force approach: we assembled a crack team which took a chainsaw to the codebase, broke it up into reasonably sized components, and then put each component back together. Hats off to the team that executed this project – wading through a spaghetti of code dependencies with the added burden of Grails was no pleasant task.  But in a few months they were done and the benefits were felt immediately.

The breakup of the monolith enabled the different teams to release independently, so we dropped the “integrated release” process and each team tested and released on their own.  The first couple of rounds were rough but we quickly hit our stride.  Overall velocity immediately improved upon removing the massive waste of the dependent codebase and labor-intensive releases.

The breakup of the monolith also untethered the various team roadmaps, so around this time we aligned teams fully behind discrete areas of the business (“value streams” in SAFe parlance).  We pushed decision making into the teams/streams, which became largely responsible for the execution of their roadmap with guidance from the executive team.  The “roadmap committee” was disbanded and strategic planning was intensified around the quarterly planning cycle.  It was, and still is, during the planning days each quarter that we identify, review and try to mitigate the major dependencies between teams.  This visibility and awareness across all teams of the dependency risk is critical to managing the roadmap effectively.

Eventually we tried to take it to the next level – integrating online marketing and other go-to-market functions into vertically aligned product teams – but that didn’t go so well.  I’ll save that story for another day.

The breakup of the monolith and distribution of control probably had the biggest positive impact in unleashing the latent velocity of the teams.  The progress was visible.  As each quarter went by, I marveled at how much initiative the teams were showing and how this translated into increased motivation and velocity. 

To be sure, there were bumps and bruises along the way.  Some product and engineering leaders stepped up and some struggled.  Some teams adapted quickly and some resisted.  Several people left the team in part because this setup required far more initiative and ownership than they were comfortable with.  But in fairly short order this became the norm and our teams and leaders today would probably riot if I suggested going back to the old way of doing things.

Some closing thoughts:

  • Organize teams for self-sufficiency and minimal skill dependencies
  • Minimize or eliminate monoliths and shared ownership
  • Keep the interface as simple, generic and flexible as possible when implementing shared systems (e.g. APIs or backend business systems) – see the sketch below
  • Be transparent about dependencies and manage them closely
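On the interface point, here’s a minimal sketch of what a deliberately narrow, generic surface for a shared system might look like.  This is a hypothetical example, not one of our actual APIs:

```java
import java.util.Optional;

// A deliberately narrow, generic interface for a shared system.
// Callers depend only on this surface, so the implementation behind it
// can change without coordinating releases across teams.
interface DocumentStore {
    void put(String key, byte[] document);
    Optional<byte[]> get(String key);
    void delete(String key);
}
```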

Accelerated Velocity: How Bonial Got Really Fast at Building Software

My boss, Max (Bonial Group’s CEO), and I sat down recently for a “year-in-review” during which we discussed the ups and downs of 2017 as well as goals for the new year.  In wrapping up the conversation, I shared with him my gut feeling that velocity and productivity had improved over the past couple of years and were higher than they’d ever been at Bonial – perhaps as much as double when compared to 2014.  

He asked if I could quantify the change, so on a frigid Sunday a couple of weeks ago I sat down with a mug of hot tea and our development records to see what I could do.  We’ve used the same “product roadmap” format since 1Q14 (described here), which meant I could use a “points” type approach to quantify business value delivered during each quarter.  As I was looking for relative change over time and was consistent in applying the scoring, I felt this was a decent proxy for velocity.

It took me a couple of hours but was well worth the effort.  Once I’d finished scoring and tabulating, I was pleasantly surprised to find that I’d significantly underestimated the improvements we’d made.  Here’s a high level overview of the results:

7X Velocity! Bonial team size, value delivered and productivity over time.

The net-net is that in 1Q 2018 we’ll be delivering ~630% more business value than we delivered in the first quarter of 2014, largely driven by the fact that each person on the team is ~250% more productive.  
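(To unpack the arithmetic: ~630% more value is a ~7.3x multiple, and ~250% more productive is a ~3.5x multiple.  Taken together, those two figures imply the team itself grew roughly 2x over the same period, since 7.3 / 3.5 ≈ 2.1.)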

Sweet.

The obvious next question: how did we do this?

The short answer is that there is no short answer.  There was no single magic button that we pushed to set us on this path to accelerated velocity; this was a long campaign that started small and grew, eventually spanning people, process, technology and culture.  Over time these learnings, improvements, changes and experiments – some large, some small, some successful, some not – built on each other and eventually created an environment in which the momentum sustained itself.  

Over the next few weeks I’ll summarize the major themes here in this blog, both for myself and for anyone who’s interested.  Along this journey I plan to cover (and will link when available):

  1. Building Leaders
  2. Building Great Teams
  3. Creating Situational Awareness
  4. Providing a Growth Path
  5. Enabling Independent Action
  6. Clarifying Processes and Key Roles
  7. Creating an Architecture Runway
  8. Optimizing the SDLC with DevOps
  9. Getting Uncomfortable
  10. Doing the Right Things

Each of those topics could alone make for a small book, but I’ll try to keep the articles short and informative by focussing only on the most important elements.  If there’s an area in which you’d like me to dig deeper, let me know and I’ll see what I can do.  Assuming I get through all of those topics I’ll wrap things up with some final thoughts.

So let’s get started with part 2: Building Leaders

Special Forces Architecture

Architects scanning for serious design flaws

I’ve been spending some very enjoyable time recently with our architecture team working through some of the complexities that we’ll be facing in our next planning iteration.  Many of those topics make for interesting posts in their own right, but what I want to discuss in this post is the architecture team itself.  

Why?  Because I’m pretty happy with how we’ve evolved to “do” architecture here.  

And why is that noteworthy?  Because too many of the software architecture teams I’ve worked in, with or around have had operating models that sucked.  In the worst cases, the teams have been “ivory tower” prima donna chokepoints creating pretty diagrams with methodological purity and bestowing them upon the engineering teams.  At the other end of the scale, I’ve seen agile organizations run in a purely organic mode with little or no architectural influence until they ran up against a tangled knot of incompatible systems and technologies burdened with massive architectural debt.  And everything in between.

So, how do we “do” architecture at Bonial?  I think it helps to start with the big picture, so I brainstormed with Al Villegas (our chief architect) and we came up with the following ten principles that we think clearly and concisely articulate what’s important when it comes to architecture teams:

  1. Architects/teams should think strategically but operate tactically.  They should think about future requirements and ensure there is a reasonable path to get there.  On the flip side, they should develop iteratively – only just enough to meet the current requirements while leaving a reasonable runway.
  2. Architects/teams should have deep domain expertise, deep technical expertise, and deep business context.   Yes, that’s a lot, but without all three it’s difficult to give smart guidance in the face of ambiguity – which is where architects need to shine.  It takes time to earn this experience and the battle scars that come with it; as such, I generally call BS when I hear about “architects” with only a few years of experience.
  3. Architects must be team players.  They should be confident but humble.  Arrogance has no place in architecture.  They should recognize that the engineering teams are their customers, not their servants and approach problems with a service-oriented mindset.  They should listen a lot.  
  4. Architects/teams should be flexible.  Because of their skills and potential for impact, they’ll be assigned to the most important and toughest projects, and those change on a regular basis.  
  5. Architects/teams should be independent and entrepreneurial.  They should always be on the lookout for and seize opportunities to add value.  They shouldn’t need much daily or weekly guidance outside of the mission goals and the existing/target enterprise architecture.  They should ask lots of questions and keep their finger on the pulse of the flow of technical information.
  6. Architects must practice Extreme Ownership.  They should embrace accountability for the end result and expect to be involved in projects from start to finish.  This means more often than not that they will operate as part of the specific team for the duration of the project.  They may also assist with the implementation, especially the most complex or most strategic elements.  “You architect it, you own it.”
  7. Architects/teams should be solid communicators.  They need to be able, through words, pictures and sometimes code, to communicate complex concepts in a manner that is understood by technical and non-technical people alike.  
  8. Architects/teams should be practical.  They need to be pragmatic and put the needs of the business above technical elegance or individual taste.  “Done is better than perfect.”
  9. Architects/teams should be mentors.  They should embrace the fact that they are not only building systems but also the next generation of senior engineers and architects.  
  10. Architects/teams must earn credibility and demonstrate influence.  An architect that has no impact is in the wrong role.  By doing the above this should come naturally.

If you take the principles above and squint a little bit, you’ll see more than a few analogs to how military special forces teams structure themselves and operate, as illustrated below: 

| High-performing Architecture Teams | Military Special Forces Teams |
| --- | --- |
| Small | Small |
| Technical experts and domain specialists | Military experts and domain specialists |
| Extensive experience, gained by years of practice implementing, fixing and maintaining complex systems | Extensive experience, gained by years of learning through intense training and combat |
| Flexibly re-structure according to the mission | Flexibly re-structure according to the mission |
| High degree of autonomy under the canopy of the business goals and enterprise architecture | High degree of autonomy under the canopy of the mission and rules of engagement |
| Often join other teams to lead, support, mentor and/or be a force multiplier | Often embed with other units to lead, support, mentor and/or be a force multiplier |
| Accountable for the end results | Accountable for the mission success |

Hence the nickname I coined for this model (and the title of this post): “Special Forces Architecture.”  

How does this work in practice?  

At Bonial, our 120-person engineering team has two people with “architect” titles, but another half dozen or so who work in architecture roles and are considered part of the “architecture team.”  An even broader set of people, primarily senior engineers, attend a weekly “architecture board” where we share plans and communicate changes to the architecture.  We recognize that almost everyone has a hand in developing and executing our architectural runway, so context is critical.  To paraphrase Forrest Gump: “Architect is as architect does,” so we try to be as expansive and inclusive as possible in communicating.

The members of the architecture team are usually attached to other teams to support major initiatives, but we re-assess this on a quarterly basis to make sure the right support is provided in the right areas.  In some cases, the architecture team itself self-organizes into a project team to develop a complex framework or evaluate a critical new capability.  

Obviously there’s a lot going on – we typically have 8-10 primary work streams each with multiple projects – so the core architecture team can’t be closely involved with every project.  To manage the focus, we use a scoring system from 1-5 (1 = aware, 3 = consulting, 5 = leading) to indicate what level of support or involvement is provided to each team or initiative.  In all cases, the architects need to ensure there’s a “big picture” available (including runway) and communicated to all of the engineers responsible for implementing.
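In code terms, the bookkeeping is trivial – something like the sketch below, reviewed quarterly.  The initiative names are made up, and the labels for levels 2 and 4 are my own interpolation (the scale above only names 1, 3 and 5):

```java
import java.util.Map;

// Sketch of the 1-5 involvement scoring. Initiative names are invented;
// levels 2 and 4 are interpolated labels, not part of the original scale.
public class ArchitectureFocus {

    enum Involvement {
        AWARE(1), INFORMED(2), CONSULTING(3), SUPPORTING(4), LEADING(5);

        final int level;
        Involvement(int level) { this.level = level; }
    }

    public static void main(String[] args) {
        Map<String, Involvement> focus = Map.of(
            "Content platform", Involvement.LEADING,
            "Data pipeline",    Involvement.CONSULTING,
            "Checkout revamp",  Involvement.AWARE);

        focus.forEach((initiative, inv) ->
            System.out.printf("%-16s level %d (%s)%n", initiative, inv.level, inv));
    }
}
```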

For example, right now we have team members embedded in our critical new content management and publishing platform and our Kraken data platform.  We have one person working on a design and PoC for updating the core user model.  Several members of the team are also designing and prototyping a new framework for managing machine learning algorithm lifecycles and testing.  And a few people have individual assignments to research or prepare for future runway topics.  In this way we expect to stay just far enough in front of the rest of the team to create runway without wasting time on phantom requirements.

Is this model perfect?  No.  But perfection isn’t our goal, so we optimize for the principles above – adaptability, autonomy, expertise, ownership, impact – and build around that.  Under this model, the Bonial platform has gone from a somewhat organic collection of monolithic apps running on a few dozen servers to a coherent set of domains consisting of APIs, micro-services, legacy systems and complex data stores running on hundreds of cloud instances across multiple continents.  I have my doubts that this would have happened in some of the more traditional models. 

I’m happy to answer questions about this model and talk about the good and the bad.  I’d also love to hear from you – what models have worked well in your experience?

The Micro-service Conundrum

Micro-services have been all the rage in software circles over the past couple of years.  A natural evolution of service oriented architectures (SOA), and popularized by successful implementations at companies like Spotify, SoundCloud and many others, micro-services have become the “must-have gadget this holiday season”: if you aren’t doing them, you must be doing something wrong.

But is that true?  As much as people (and especially engineers) love black and white, the answer here is a firm “maybe.”  Here are some of the positives and negatives from one CTO’s perspective.

On the plus side, micro-service architectures provide an excellent canvas for rapid development and continuous integration.  Hard dependencies are minimized, business logic is localized, and the resulting services are typically cloud ready.  Developers tend to like micro-services because they allow for a great deal of independence.  It’s hard to overstate the potential pain savings and optimizations – people, process and technology – that can be driven by moving to this type of architecture.

But it doesn’t come for free.  For starters, you’ll likely have a lot more moving pieces in terms of individual components and running executables.  A few weeks ago I wrote a post on the architectural heuristic: Simplify Simplify Simplify in which I posited that simple is better when it comes to minimizing TCO.  In that vein, one must ask if micro-services follow the rule.  Yes, each individual service itself is simpler than a bloated monolith as a result of the small size and tight boundaries.  But the total business logic in your enterprise hasn’t changed, and now you may have hundreds or thousands of additional code modules to manage and executables to orchestrate.  The good news is that cloud hosting providers like AWS provide an ever increasing set of tools to help with managing micro-service architectures (e.g. Lambda, Container Services), but it still requires a good deal of cultural and process change.

Another side effect of the proliferation of executables is a potential increase in cost – many hosting providers and software vendors (e.g. APM providers) still price based on the number of processes or agents.  If you take the same processing load and 10X the number of running processes, you might find yourself in a world of hurt pretty quickly.

Finally, in moving to micro-services, you’ll find yourself needing to address a host of new challenges that you may not have had to previously – service discovery, versioning, transactions and eventual consistency, event tracing, security, etc.  At a minimum, the upside benefits you’ll realize will be offset by developing competency and code to solve those new challenges.

So, what does this mean for the typical company?  If you have applications that are bloated monoliths, those are fantastic candidates for breaking down into smaller components or micro-services.  On the other hand, if you have a reasonably well architected system with decent boundaries already in place, I’d carefully weigh the cost-benefits – maybe run a few trial projects to get a better sense of how micro-services would fit into your platform.  Just realize that in many ways you’re “squeezing the balloon” – trading one set of problems for another.  So long as you’re happier with the new problems (and the corresponding benefits), you win.

In closing, whether you move to micro-services or not, I do think there are great lessons to be learned from applying the discipline required by micro-services – namely, enforcing clear boundaries around business logic and using “API thinking” to serve a variety of clients.  I wonder if there isn’t a compromise to be had in which you use the principles for developing and organizing the code but still deploy in a more constrained manner – “Code Micro, Deploy Macro” (sketched below).  But that’s a discussion for another time.
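For what it’s worth, here’s a minimal sketch of the “Code Micro, Deploy Macro” idea: modules with micro-service-style boundaries, wired together into a single deployable.  All names are illustrative:

```java
// Each "service" is a module behind a strict interface – micro-style code
// boundaries – but everything ships as one process: macro deployment.
interface CatalogService { String describe(String productId); }
interface PricingService { long priceInCents(String productId); }

class InMemoryCatalog implements CatalogService {
    public String describe(String productId) { return "Product " + productId; }
}

class FlatPricing implements PricingService {
    public long priceInCents(String productId) { return 999; } // dummy price
}

public class MonoDeploy {
    public static void main(String[] args) {
        // A single composition root wires the modules together.  Replacing a
        // module with a remote call later only touches this wiring.
        CatalogService catalog = new InMemoryCatalog();
        PricingService pricing = new FlatPricing();
        System.out.println(catalog.describe("42") + " costs "
                + pricing.priceInCents("42") + " cents");
    }
}
```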

Simplify Simplify Simplify

Near the end of my first year in my first development job at Andersen Consulting (now Accenture) I was handed an exciting new project: to develop a Monitoring And Reporting Server for a major bank’s financial systems.  And to do it in a few weeks.

This was the mid-1990s – in the very early days of the web and long before the ELK stack and other tools that would have made this pretty straightforward.  In addition, we were developing this with C/C++ and had to write it all from scratch, as open source was still in its infancy and Java and C# were still to come.

But what software engineer doesn’t like these kinds of challenges?  I was feeling pretty good about myself – I’d had a couple of solid wins up to this point, so in some ways this project was a nod to my success so far and a challenge to step up from pure coding to design and architecture.

Under the tutelage of my supervisor at the time I sank my teeth into it.  Given the challenges – no open source, unforgiving programming environments, limited compute and memory, limited time, etc. – you’d think I’d try to keep it simple.  Right.  On paper (yes, we still used paper) I created an incredibly sophisticated design with input channels (“Translators” in design pattern parlance, but even design patterns were in their infancy), normalization layers, storage mechanisms, and “bots” to look for anomalies.  It had self-monitoring and auditing and adapters for new pluggable streams.  For good measure I designed it all to interoperate with other systems and technologies using a (complex and slow) CORBA ORB.  I abstracted everything just in case some future unknown requirement might require extensions or adaptations.  I was very proud of it.

It was never completed.  

Thank goodness.  While I was disappointed at the time, I realize now that this creation was destined to be a total monstrosity and likely a failure.  Soon into the coding I was already bogged down with massive issues in keeping track of the (hand written) thread pools as well as challenges managing the complexity of the modular system builds.  Had it been completed it would have taken a team of rocket scientists to understand and maintain what was, in the end, a pretty simple concept.  Fortunately another project came along with higher priority and I was put on that.  (Plus, though it was never said, I think my supervisor knew deep down that my approach was leading us to a bad place.)

What went wrong?  Easy – I’d violated the first rule of software architecture: SIMPLIFY SIMPLIFY SIMPLIFY. 

Duh!  Right?  Anyone in the field for more than a few seasons knows that more complexity leads to higher risk, more bugs, longer development times and greater total cost of ownership (TCO).  But we do it anyway.  The wisdom gets lost in the excitement of creating something big and beautiful.

It happens all the time. Even with all of the “bumpers” to keep us honest these days – solid design patterns built into many of the open source frameworks, Lean software practices that naturally select for simplicity – I still see it violated on a regular basis.  I spent several years consulting for large corporations, largely fixing violations to this rule.  Even today some of the biggest challenges my team faces at Bonial come from systems and projects that were simply too complex for the work they did.  

So, how do you implement this principle in practice?  There’s no magic – you just challenge yourself or, better yet, have the team challenge you.  When you’ve come up with your approach or design, ask yourself, “Ok, now how can I make this 30% simpler?”  Look for over-design, especially abstractions and facades that aren’t really needed or could be added later.  Challenge the need for every concurrency implementation, especially when app servers are in play.  Look hard at the sheer number of moving pieces and ask whether one object / module / server will suffice.  And always look for software, best practices and frameworks that already do what you’re trying to do – the simplest thing is not having to do it at all.
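To make the “30% simpler” challenge concrete, here’s a toy before-and-after of my own (not code from that project): the first version abstracts for requirements nobody has yet; the second does today’s job and nothing more:

```java
// Over-designed: abstraction layers for requirements that may never arrive.
interface MetricSource { double read(); }
interface MetricSink { void write(double value); }
interface MetricTransformer { double apply(double value); }

class MetricPipeline {
    // ...registries of sources, sinks and transformers, plugin discovery,
    // threading, configuration frameworks...
}

// Simpler: the one thing actually needed today.  Add the abstractions later,
// when a second concrete use case proves they earn their keep.
class CpuAlert {
    static void check(double cpuLoad) {
        if (cpuLoad > 0.9) {
            System.out.println("ALERT: CPU load at " + (int) (cpuLoad * 100) + "%");
        }
    }

    public static void main(String[] args) {
        check(0.95); // prints an alert
    }
}
```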