Saturday, March 25, 2017

Adapting the retrospective format for a struggling team

My Spring Game Studio team has been having a bit of trouble hitting its stride. We just finished our fourth sprint on Friday, but it was the fourth failed sprint in a row, with fundamental work incomplete and the executable release inadequate for end-user testing. I saw this coming late Thursday night, when I had a few minutes to manually test the build, and so the failure during the Sprint Review meeting didn't surprise me the way it did some of the team members. It may be worth noting that I did post to Slack right away that I had discovered a defect, but several team members didn't see the message. In any case, I kept turning this over in my head as I tried to sleep Thursday night. I don't remember the last time one of my teams had four failed sprints in a row: what should I do to help them come together?

My default Sprint Retrospective format involves distributing sticky notes on which students write answers to four questions, which they then post on the board: What did we do well? What did we learn? What should we do differently? What still puzzles us? That format has been generally useful, and it has been specifically useful in helping me design interventions around specific needs. Given that the team had had three previous retrospectives, in which the same issues kept coming up but were never really addressed, I began to think that a change in retrospective format was in order.

In the morning, I flipped through Patrick Kua's The Retrospective Handbook for inspiration. I think I had come across a reference to this book on Martin Fowler's bliki, but it had been some time since I read it. Kua describes a format called "Solution-Focused Goal-Driven Retrospectives," which he credits to Jason Yip—in fact, Kua's presentation is essentially a copy of Yip's blog post, so if you read that, you've got the idea. Two things struck me about this format and made me think it would be useful for my purposes. First, by starting with "the miracle question," you help the team think about observable properties of desirable end states. This seems better suited to identifying shared goals than my traditional retrospective format. Second, it still results in measurable actions to take in the coming iteration toward incremental improvement.

I rolled this out on Friday right after our Sprint Review meeting. We primed with the Retrospective Prime Directive, and I pointed out—honestly, and hopefully not too judgmentally—that the team had had four failed sprints in a row, and that I was changing the retrospective format to help them identify shared goals. I wrote the Miracle Question on the board: "Imagine that a miracle occurred and all our problems have been solved. How could you tell? What would be different?" There was a palpable sense of surprise in the room. A few students gasped, and some said things like "Whoa... I don't know" as they started turning the question over in their minds. One even claimed that he did not like the question, but he said it in a way that made clear he actually did: what he didn't like was how hard it was for him to answer!

When I use my traditional retrospective approach, I invite the students to form clusters organically as they post their notes, but the clustering is usually pretty loose, and themes that are distributed across multiple columns cannot be clustered at all. Most students get up, post their notes, then sit down, so it's really just the last few who do the clustering. For this exercise, we moved all the tables back so that everyone would have room to reach the board at once. Three or four students still retreated to the small gap behind the tables, but I called them out on this and made them join the group at the front—and I'm glad I did, despite a little whinging from them. As they formed clusters around goals, I asked them to articulate those goals. This was much more active than my usual approach, with students passing markers around, dividing big clusters, and revising each other's articulations of the goals.
[Image: the finished board with 13 shared goals]
When I asked them to rate the team on a 0-10 scale in terms of meeting these goals, the first estimates were in the 4-5 range. Then someone suggested that the team look at how many of the goals they had actually met, and the estimates dropped to 2-3. It was an interesting moment of realization: the team really did want to meet these goals, but they recognized that they were still having a lot of trouble doing so. This set up the third step of the retrospective perfectly: bringing the team back up by pointing out that 3 is still not zero! We listed the practices we were already following that were leading us toward our goals, with a team member serving as scribe on a side board.

This led nicely into a discussion of what specific practices we wanted to adopt to take our 3 to a 4 over the next short sprint. Most of the suggestions were clear and reached quick consensus. One had some contention, though, as we tried to sort out the root of the problem. It started with a suggestion to clarify the conditions of satisfaction on the user stories, but when I looked at the conditions of satisfaction, I couldn't see how they were unclear. Of course, I acknowledged that I had written them, so the root problem could have been that I was assuming domain knowledge that the team didn't have.

The discussion raised a bigger problem, though, which I think was the real root: team members had focused myopically on completing the tasks we had identified during sprint planning, but they never went back and read the story name and conditions of satisfaction as they considered validation. Hence, individual tasks were deemed validated when considered in isolation, in a way they never would have been if they had been held to the conditions of satisfaction. The result was that the tasks were all complete but the story was not satisfied. We never did settle on a concrete action item to solve this, although we agreed that the discussion would make us more sensitive to the articulation of both tasks and conditions of satisfaction in the coming planning meeting. Looking back, this may have been a good opportunity to deploy Five Whys to get at root causes, but the truth is that we were fighting the clock at this point.

I think this format helped my team articulate and discuss critical team issues that they had not been confronting before. Whether it makes an observable impact in the coming sprint will have to wait for a future blog post. I will need to think about where this format fits in my tool belt: I am not sure I want it to replace my traditional format as my go-to structure, but I would like to try deploying it earlier with a team to see if it helps with the identification of shared goals.
