How to conduct 20, 30, 50 and even 100+ TEAM Retrospectives Using Innovation Games® Online
A few years ago, I was helping with a large-scale enterprise Agile Transformation project led by Applied Frameworks, with several hundred engineers spread across multiple locations. Our client was a cloud-based infrastructure company that had grown very rapidly, both organically and through multiple acquisitions. A key part of Applied Frameworks' transformation services is starting every transformation with retrospectives so that we can better understand what all members of the product development function – from developers and quality assurance to product management and customer care – think about what is going well and what needs to be improved.
Speed Boat is our game of choice. But the size of the organization involved meant that we would be conducting multiple moderately large retrospectives of 30 to 40 people. This was costly, of course, and it took a lot of time, but the enlightened leadership of this company was committed to understanding the perspective of their employees before cramming Agile down their throats (sadly, too many companies are cramming Agile – a topic for another post).
Following my own recommendations on game design, we had several teams playing Speed Boat in the same room. I’ll never forget when one of the developers in the game I was facilitating stated: “You know, Luke, this game is fine – we’ve played Speed Boat before with good results. And if you ask us about our anchors and propellers, we’ll tell you, but honestly, it won’t make much of a difference. You see, my company was acquired because we’re a really good Agile shop with a great product. But now that we’re here, we’ve found multiple source code control systems, multiple requirements management systems, at least two corporate content repositories, and different testing frameworks. The difference is that when we were small we could change these things. Now we’re just a few teams out of roughly 60 teams. We can’t change key things on our own. If you really want to help us, focus on the enterprise, and map out projects that can affect everyone. And sure, we’ll complain a lot as you help us standardize, but the reality is, we’ve hit a wall on what we can improve as individual teams.”
This developer was right. I called an audible with my co-facilitators, grouped everyone into one big Speed Boat game, and focused on inter-team and enterprise impediments. We then repeated this at three other development locations. It took time, and a considerable investment in logistics, but we identified and implemented some key projects, such as standardizing on an ALM vendor and a source code management system. These projects, which did indeed affect the enterprise, took months to implement, with high-impact results.
This was my first Large, Distributed Team Retrospective (LDT/LDTR). We produced it using traditional techniques with direct support from the highest leaders in the company. But it was too costly and took too long. Since conducting this retrospective, I’ve been asking other organizations with LDTs (20, 50, 100, and even 250+ Scrum teams, inevitably distributed across multiple time zones) about their experience with retrospectives. The results are not promising: most LDTs are not consistently conducting substantive retrospectives (I’ll expand on this later).
If Retrospectives Are So Great, Why Do So Many LDTs Stop Doing Them?
The simple answer is that traditional approaches to retrospectives – assembling a group of people in a room with one or more facilitators – are too costly, don’t scale, take far too long, and fail to produce high-impact results. As a consequence, large organizations either skip retrospectives entirely or relegate them to individual teams, which tragically limits their effectiveness in identifying and implementing enterprise changes that can profoundly improve performance. Over time, because individual teams are not obtaining material benefits from retrospectives, they stop doing them altogether.
Since then, we’ve changed our process to use Innovation Games® Online for LDTRs. Switching to online games enables organizations to conduct scalable, efficient, cost-effective, and high-impact retrospectives. Our game of choice is still Speed Boat. The core process is that each team plays its own online game at a time convenient to them (usually one or two one-hour games are all that is needed), with multiple facilitators to reduce bias. The producers of the event then download the results and, using advanced analytic techniques, analyze them to identify patterns of issues that affect the enterprise. These are shaped into projects, and one or more are implemented. The process is faster and considerably lower cost than traditional in-person techniques.
We’ve conducted LDTRs for Agile teams and even for other parts of the organization, such as sales. Given how much Agile has scaled over the past few years, I thought it was time to share our experiences and provide a playbook for others who want to use this process in their organization. In this post I’ll briefly review the motivations for retrospectives, review the challenges of existing techniques, and then present our proven process for conducting LDTRs. I’ll draw examples from several LDTRs we’ve conducted for our clients in sales and software development and present a framework that you can leverage in your organization.
Oh – one final thing. This post is designed for people working in moderately large (10 teams/60+ people) to extremely large development organizations (250 or more teams with thousands of people).
Retrospectives Really Are Great!
Early in my career I had the good fortune to take Gerry Weinberg’s Problem Solving Leadership class. Norm Kerth, author of Project Retrospectives: A Handbook for Team Reviews, was my instructor, and he instilled in us the power of retrospectives.
As the principles behind the Agile Manifesto put it: “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.”
Since then, we’ve seen increasing wisdom in conducting retrospectives. For example, Certified Collaboration Architect Diana Larsen and her colleague Esther Derby wrote Agile Retrospectives: Making Good Teams Great, a book I find quite useful. And Diana continues to innovate in this space, sharing new retrospective techniques such as Circles and Soup and Anonymous Cards.
I could go on – there are a tremendous number of Agile-related or Agile-inspired retrospective techniques, blogs and books – too many to enumerate here. It suffices to say the Agile community has embraced retrospectives.
But Agile is not the only community to embrace retrospectives. Sales and Marketing teams use retrospectives to improve their performance in two common ways. First, they conduct internal process retrospectives to identify opportunities for improvement. Second, they regularly conduct Win/Loss analysis to understand how to improve the entire marketing and sales process. You can find a practical guide to Win/Loss Analysis by Steve Johnson and Certified Collaboration Architect Jim Holland here.
The Retrospective Performance Curve
If retrospectives are so great, why do large organizations stop doing them? To help answer this question, we’ve been asking hundreds of agilists to draw a retrospective impact curve. A composite of many individual curves is presented below. As you can see, there are four distinct phases of retrospectives for individual teams within large distributed teams.
|Early Adoption||In the early adoption phase, teams experience rapid improvements because they are often new to Agile; retrospectives help them fine-tune their processes, learn new processes, understand new roles, and identify opportunities for improvement. This is often a time of substantial team-based productivity improvements, as teams focus primarily on their own team dynamics.|
|Team Maturation||As teams mature, the impact of their retrospectives starts to wane: the problems they identify are bigger and harder to fix, often involving coordination with other teams. We start to see a shift in focus from intra-team issues to inter-team issues.|
|Organizational Limits||To address these issues, many teams spontaneously choose to hold bigger, more comprehensive retrospectives, often with co-located teams with whom they collaborate. These retrospectives typically identify inter-team improvements and often some process and technical improvements. As these improvements are made, teams start to lose interest in retrospectives because they are no longer providing material value.|
|Why Bother?||The last stage of retrospectives is when teams stop doing them. Oh sure, they might conduct a retrospective as a token ritual – a means to share beer at the end of the Sprint – but no one takes it seriously. And while we’re all for sharing some Tequila (or beer, as you wish) at the end of a Sprint, we think it might be better to change this curve.|
Summarizing a bit, the root cause of large distributed teams stopping retrospectives is that they bump into the limits of what they can improve. And if they can’t improve, why bother with the retrospective?
As a corollary, note that another root cause of large teams stopping retrospectives is that they conduct them too frequently and therefore too trivially. The Agile Manifesto never said to conduct a retrospective every Sprint!
Challenges With Scaling Traditional Retrospectives
Traditional retrospectives assemble participants in a room for a structured meeting that can last anywhere from 1 hour to as long as 1 day (you can find LOTS of very good advice on how to conduct traditional retrospectives; I won’t repeat that advice here). While getting a single team together for an in-person retrospective is often no more complex than booking a conference room, as the number of participants/teams increases, costs and complexity increase dramatically. We’ve produced in-person retrospectives for 30 people; and other Certified Collaboration Architects such as Henrik Kniberg have produced retrospectives for 65 people (see here).
And while 65 people might seem large, we’ve produced dozens of in-person events that are much larger. We regularly produce Innovation Games® sessions with more than 100 participants; our record is a 500 person event for Intuit, and we’ve explored producing a 2,000 person event for the City of San Jose, CA as part of our Budget Games initiative through our non-profit partner, Every Voice Engaged Foundation.
The two limiting factors for in-person games are cost and logistics. I’ll return to costs later in this post; here I’ll focus on various logistics challenges.
|Facilities Complexity||Unlike a conference room or a team room, in-person events involving hundreds of people require special space: ballrooms in hotels or conference / event centers. This causes scheduling and materials complexity, because you have to schedule the retrospective when space is available and you might need special materials.|
|Scheduling Complexity||Managing the travel of the people involved in an LDT is complex – multiple flights, hotels, food. Ick. And if the retrospective really is HUGE (say, a 600-person development organization organized in 80 teams) you’re going to have to plan your event carefully to ensure you’ve got the right space (see the previous row!).|
|Agenda Complexity||Larger events require more carefully planned, more strictly controlled agendas: It is pretty trivial to swap out one retrospective activity for another when you’ve got one team; really hard if you’ve got even more than 5 teams, and effectively impossible when you’ve got 20 or more.|
|Staffing Complexity||Great retrospectives share many qualities of great qualitative market research: the facilitators / moderators ensure positive outcomes with minimal bias. Like other components of “largeness,” as the number of facilitators increases, the effort to coordinate them also increases. To minimize coordination overhead, it is very helpful if all of the facilitators share knowledge of the same techniques. Add to this the additional staff needed to manage the event.|
|Materials Complexity||Many great in-person retrospective techniques, like Speed Boat, require customized materials and lots of wall space. That’s easy to get for small teams – hard for large teams. And always check with your facility to confirm they’re OK with you taping stuff to the walls – because if they’re not, you’re going to have to rent more gear for your retrospective.|
|Food and “Other” Stuff||More people means increased food costs, etc. And it often means a longer retrospective than truly necessary: if you’re going to invest this much money on a retrospective, you sure better structure the event to consume at least an entire day of time!|
To get a sense of just how much of an investment you might need to make in your large retrospective, we’re building a handy cost calculator. When it’s done, we’ll put it here. I promise.
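In the meantime, a back-of-the-envelope model captures the main cost drivers discussed above. Every rate in this sketch (airfare, hotel, venue, catering, loaded hourly cost) is an illustrative assumption, not a real client figure; substitute your own numbers:

```python
def in_person_retro_cost(people, travel_fraction=0.6, airfare=500,
                         hotel_nights=2, hotel_rate=150, per_diem=75,
                         venue_per_day=3000, days=1, loaded_hourly=100,
                         hours_per_day=8):
    """Rough cost model for an in-person large retrospective.

    All rates are illustrative placeholders -- substitute your own.
    """
    travelers = int(people * travel_fraction)
    # Travel: flights plus hotel and meals for everyone who must fly in.
    travel = travelers * (airfare + hotel_nights * (hotel_rate + per_diem))
    # Facilities: ballroom or event-center rental.
    venue = venue_per_day * days
    # Catering estimate at $50 per person per day.
    food = people * 50 * days
    # Opportunity cost of everyone's time at a loaded hourly rate.
    time = people * hours_per_day * days * loaded_hourly
    return travel + venue + food + time

# Example: a 300-person, one-day retrospective
total = in_person_retro_cost(300)  # 429000 under these assumptions
```

Even with conservative inputs, the time cost of the participants usually dominates, which is why large in-person events feel pressured to "consume at least an entire day."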
Online Retrospectives Are the Answer
Speed Boat is a well known retrospective technique. Taking it online provides the scale you need. I’ve presented the overview of the process earlier. Here it is again in a handy checklist – with a lot more implementation details.
Step 1: Each team plays Speed Boat online, creating a single result per team.
- Keep each team intact – because at scale, teams are the unit of all organizational engineering.
- Use multiple facilitators to reduce unnecessary facilitator bias and improve results.
- Have them all use the same facilitation script, like this one.
- You can use facilitators within your company or consultants.
Step 2: Results of each team are downloaded into a centralized spreadsheet. This is easy – each facilitator just downloads the results of their games and uploads the results into a common spreadsheet.
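As a sketch of this step, the snippet below merges per-game CSV downloads into one combined file. The file pattern and column layout are assumptions for illustration; adjust them to match the actual export format:

```python
import csv
import glob

def merge_game_results(pattern="results/*.csv", out="all_results.csv"):
    """Combine each facilitator's downloaded game results into one file.

    Assumes each per-team CSV shares the same header row, e.g.
    team,item_type,text -- adjust to match your actual export columns.
    """
    rows, header = [], None
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)      # keep the first header only
            if header is None:
                header = file_header
            rows.extend(reader)
    with open(out, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    return len(rows)                        # number of merged items
```

In practice a shared spreadsheet works just as well; the point is simply that every anchor and propeller from every game ends up in one place before coding.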
Step 3: Results are coded by People / Process / Technology AND by scope of control. Although a large team may be needed to facilitate the games, we recommend a small team of 2-3 people code the results, for speed and consistency.
- Each item placed into the game is coded – if you’re using Speed Boat, this means every anchor and propeller!
- We recommend coding items with a primary People / Process / Technology value and an optional secondary value. For example, “My PO doesn’t attend review meetings” could be coded primarily as People and secondarily as Process, while “We should switch to GitHub” would likely be coded primarily as Technology.
- The online chat logs are invaluable in identifying underlying issues.
- We then recommend using Diana Larsen’s Circles and Soup taxonomy to assess the perceived degree of control a given team has in addressing any impediment:
- Team: This is an issue that the team should address. For example, a PO not attending review meetings should be handled by the team.
- Product / Group: This is an issue that the team can’t address on its own, but that is likely within the scope of the product or group.
- Enterprise: This is an issue that requires coordinated effort at the enterprise level. For example, moving to GitHub is likely to affect all of the teams within the enterprise. As such, it should be carefully assessed as a potential enterprise project and compared with other high-impact projects.
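The coding scheme above can be captured in a small data structure. This is only a sketch: the area and scope values come from the text, but the `CodedItem` class itself is hypothetical, not part of any actual tooling:

```python
from dataclasses import dataclass
from typing import Optional

# Taxonomy values from the coding scheme described above.
AREAS = {"People", "Process", "Technology"}
SCOPES = {"Team", "Product/Group", "Enterprise"}   # Circles and Soup-style scopes

@dataclass
class CodedItem:
    text: str                   # the anchor or propeller as written
    kind: str                   # "anchor" or "propeller"
    primary: str                # primary People / Process / Technology code
    scope: str                  # perceived degree of control
    secondary: Optional[str] = None  # optional secondary code

    def __post_init__(self):
        # Guard against typos during coding sessions.
        assert self.primary in AREAS and self.scope in SCOPES
        assert self.secondary is None or self.secondary in AREAS

# The two examples from the text, coded:
items = [
    CodedItem("My PO doesn't attend review meetings", "anchor",
              primary="People", secondary="Process", scope="Team"),
    CodedItem("We should switch to GitHub", "anchor",
              primary="Technology", scope="Enterprise"),
]
```

Keeping the codes as validated fields (rather than free text in a spreadsheet column) helps the small 2-3 person coding team stay consistent across hundreds of items.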
We often do extended analysis to identify various kinds of biases that can creep into the game play. Here are some biases that can affect your results:
- Positivity Bias is a pervasive tendency for people [teams], especially those with high self-esteem, to rate positive traits as being more true of themselves than negative traits. This can happen when a team is asked to identify Propellers. To catch this, we look for propellers or chat logs with aspirational language, such as “We could do this…”, or prescriptive language, such as “We should do this…”.
- Sampling Bias occurs when a small portion of the organization plays (e.g., 20 out of 60 teams) or only one kind of team is engaged. Your goal should be at least 90% of the teams participating.
- Method or Question Bias can inappropriately guide participants into answering questions. By keeping things open-ended, Speed Boat and other games minimize method and question bias.
We expect producers of LDTRs to take into account any potential biases and to provide an assessment of potential biases in their research reports.
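Some of these bias checks are easy to automate. The sketch below flags propellers with the aspirational or prescriptive wording described above, and computes the participation rate against the 90% target; the phrase list is an illustrative assumption you should extend:

```python
import re

# Phrases suggesting a "propeller" is aspirational rather than actual --
# a positivity-bias red flag per the heuristics above. Extend as needed.
ASPIRATIONAL = re.compile(r"\bwe\s+(could|should|ought to|need to)\b",
                          re.IGNORECASE)

def flag_positivity_bias(propellers):
    """Return propellers whose wording is aspirational or prescriptive."""
    return [p for p in propellers if ASPIRATIONAL.search(p)]

def participation_rate(teams_played, total_teams):
    """Sampling-bias check: aim for at least 90% of teams participating."""
    return teams_played / total_teams

flagged = flag_positivity_bias([
    "Our CI pipeline is fast and reliable",
    "We should do this across all repos",
])
# flagged -> ["We should do this across all repos"]
```

Flagged items aren't discarded; they are reviewed against the chat logs to decide whether they describe a real strength or a wish.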
Step 4: Results are analyzed to identify patterns. One of the great advantages of digital results is the ability to analyze data using sophisticated tools like R and QlikView. Here are the results from a 42-team retrospective with nearly 1000 unique anchors and propellers. A “Pod” is a grouping of related products. For those of you who are trying to convince Senior Leaders that an organization-wide impediment exists, this kind of visualization of results is invaluable!
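As a minimal, tool-agnostic sketch of this kind of pattern analysis, the snippet below tallies coded items by scope and area; in practice you would feed the full download into R or QlikView for richer visualization:

```python
from collections import Counter

def pattern_counts(coded_rows):
    """Count coded items by (scope, area) to surface enterprise patterns.

    `coded_rows` is a list of dicts with 'scope' and 'primary' keys,
    matching the coding step above.
    """
    return Counter((r["scope"], r["primary"]) for r in coded_rows)

# Tiny illustrative dataset -- a real LDTR yields hundreds of rows.
rows = [
    {"scope": "Enterprise", "primary": "Technology"},
    {"scope": "Enterprise", "primary": "Technology"},
    {"scope": "Team", "primary": "People"},
]
counts = pattern_counts(rows)
# counts[("Enterprise", "Technology")] -> 2
```

When the same (Enterprise, Technology) pairing shows up across dozens of independent team games, that convergence is the evidence Senior Leaders find hard to dismiss.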
Step 5: Patterns are shaped into potential projects. This step can take a week or so – which is a good thing! You’re looking for incredibly high-impact opportunities. Investing time in identifying them will pay incredible dividends.
Step 6: Projects are selected. If there are only a few projects, we simply select them; for larger numbers of projects, we use Buy a Feature.
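For illustration, here is a simplified sketch of the Buy a Feature selection mechanic: teams spend a limited budget on candidate projects, and a project is selected when its combined bids cover its price. The project names, prices, and bids below are hypothetical:

```python
def tally_buy_a_feature(bids, prices):
    """Sum each team's spend per project; return fully funded projects.

    `bids` maps team -> {project: amount}; a project is selected when
    total bids meet or exceed its price. A simplified sketch of the
    Buy a Feature mechanic.
    """
    totals = {}
    for spend in bids.values():
        for project, amount in spend.items():
            totals[project] = totals.get(project, 0) + amount
    return [p for p, total in totals.items() if total >= prices[p]]

# Hypothetical example: two teams pooling budget on two candidate projects.
selected = tally_buy_a_feature(
    {"team-a": {"ALM standardization": 60},
     "team-b": {"ALM standardization": 50, "New test lab": 30}},
    {"ALM standardization": 100, "New test lab": 80},
)
# selected -> ["ALM standardization"]
```

Pricing projects above any single team's budget is what forces the cross-team negotiation that makes the game valuable.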
To help you put this playbook into action, the end of this post has links to additional reference materials, sample scripts and post-processing of results.
Case Study: Large “Captive” India Team Retrospective
Our client was a €1B technology company with centralized product management and business leadership and three large development centers. One of these was a captive center in India, with 54 teams and approximately 450 people. The client wished to engage their development teams in identifying more substantive and actionable feedback than traditional retrospectives had provided.
We scheduled the teams, trained the facilitators, conducted the retrospectives, processed the results and made our recommendations. We organized our recommendations by Scope into Team and Enterprise; by Area into People/Process and Technology. An example enterprise technology recommendation was increasing the budget for hardware associated with testing and simulating production operations in development. An example enterprise process recommendation was changing certain processes associated with DevOps. Our client is now implementing these items.
Case Study: Cisco Sales Enablement Retrospective
This project was engaged as part of Cisco’s broader ACT program: Accelerated Cisco Transformation, a multi-year program to implement major improvements at Cisco. Cisco wanted to engage a global team of Account Managers, Sales Engineers and Product Sales Specialists to identify sales improvement opportunities.
Conteneo designed and produced a series of online games that engaged hundreds of Cisco’s sales teams. I’m including it here to illustrate that LDTRs can benefit every functional unit within a company – not just software development or product development.
The games produced:
- 490 unique ideas defining opportunities for improvement in Culture, Process & Technology
- 15 thematic business challenge areas
- 20 tactical projects
- 15 strategic projects
The Cisco sales leadership team subsequently selected and engaged key projects that helped the globally distributed team address many impediments.
To continue to gain the benefits of Agility at Scale, organizations must move beyond traditional, in-person retrospectives focused on making individual teams great, and shift towards low-cost, online retrospectives focused on making organizations great. Innovation Games® Online provides a low-cost, efficient, massively scalable and extremely high-impact collaboration platform upon which organizations produce Large Distributed Team Retrospectives.
Here are several resources that will help you in implementing a Large Distributed Team Retrospective in your organization.
|Speed Boat with Propellers and Anchors||This game has 25 propellers to represent positives or things going well and 40 anchors to represent impediments or opportunities for improvements. The game has three regions to capture the “weight” or “badness” of the anchor and two regions to capture the “goodness” of the propeller.|
|Facilitation Script||Here is a sample Facilitation Script that you can download and edit for use in your games. Distribute this to all of your facilitators to help ensure consistency in the process. Note that while players do not have to have an account, every facilitator must have an account so that we can authenticate them.|
|Post-Processing Guide||This is a sample Excel spreadsheet that is reflective of the kind of download you will get from Innovation Games® online. We’ve included helpful examples on how you might want to post-process results.|
|Sample Presentation||Need some help in putting together a presentation to convince your organization you could benefit from a LDTR? This presentation will help you!|