Friday, February 12, 2016

Spring 2016 Studio: Story Maps vs Scrum Board

I mentioned in my last post that I decided to experiment with story mapping as an alternative to Scrum-style task tracking boards. My team had their first release on Wednesday, which led into an excellent playtesting session at Connection Corner. However, the team was only able to make the release due to the herculean effort of a few technical team members, who worked through the day on Tuesday to add missing critical features to the prototype.

It's worth dwelling on that phenomenon itself for a moment. This is part of a three credit-hour course, which means--according to federal standards--that we should expect nine effort-hours per week from each student during the fifteen weeks of the semester. The team has scheduled meetings for six hours per week, which I set up in order to maximize face-to-face time for this multidisciplinary team. At least one student put in about nine hours on Tuesday alone to help us meet this deadline. He didn't have to do this, in the sense that no punitive grade would have been assigned, and the shame at not having met our goal would have been shared by the entire team. Worse, we would not have been able to conduct our playtesting session, which would mean we would still be grappling with fundamental uncertainties today. On the one hand, we need to congratulate the students who put in significant extra effort to help the team meet its goals (and indeed we gave a round of applause on Wednesday); on the other hand, we need to admonish ourselves for getting into a situation where so much work was required right before an important deadline. How did we get to that point? It seems there are only two options: either it was unclear to team members that we were headed toward missing the goal, or they did not care that we were going to miss it. The behavior of the subset of students who put in extra hours leads me to conclude the former.

On Monday, many stories remained on the spine of the story map, all or nearly all with student names on them as responsible parties. On Wednesday--the day of the deadline--I wrote a note on the board in the morning asking that the story map be brought up to date for our review & retrospective meeting, and even then, the story map was untouched.

Every student team goes through growing pains as they realize they need to be more careful about communication and planning. The retrospective meeting on Wednesday confirmed that the students were aware of the need to coordinate more prudently and tactically. I am left wondering, however, how much of this is related to the story map. I could look at the story map and tell that we were not making progress, since notes were not moving, but I think the team was blind to this. I suppose this iteration will show whether or not they hold themselves more accountable to the practice. During the retrospective, a student contributed what I think will be a valuable action item: immediately after the stand-up meeting, gather around the story map to review where we are. Today was a planning meeting for the next iteration, so it will be Monday before we get any sense of whether this helps.

Another item that gives me some unease is the tracking of stories (in a story map) vs. tasks (on a conventional Scrum board). User stories are necessarily brief, but addressing the need they express requires decomposing the story into discrete tasks. Using Scrum, this was done explicitly during the Sprint Planning meeting. For example, a story like "As a player, I want to create my character" might break down into a design task and an implementation task, to make it clear that we should figure out a good user experience before programming it. However, in the last iteration, we saw several UI components that were thrown together by programmers--surely with good intentions, but just as surely without any background knowledge in usability and game interface design principles! As above, there's too much ambiguity to draw a particular conclusion: did the students not know that they should step back and work with the team to design a good UI, or were they just working quick-and-dirty on a two-week prototype? The truth may be somewhere in the middle, and if it was purely ignorance, I can hope that this two-week prototype was a learning experience about the importance of critical design!

This leads me to one last point that I want to record here, so I don't forget. This two-week prototype was "slapped together" intentionally. The team agreed on several methodological rules governing source code, but they also agreed that these rules would not hold for the two-week prototype: this could be done "quick and dirty." Late in the iteration, I started looking more carefully at the code being committed to the repository, and I was struck by how one developer's "dirty" might be another developer's "abominable." When I said dirty, I meant we could be lax about things like having rigorous unit tests, or allowing several parameters to a constructor rather than introducing a builder. Others clearly had a different idea! This strikes me as a fascinating subject for software engineering research: how do different developers approach a project when told they can do it "dirty" vs. "clean," what are the contributing factors in their decisions, and what are the inherent properties of the code they produce? Maybe someone has done this already, but if not, it sounds like a worthwhile study---and it clearly resonates with the kinds of discussions I have daily in CS222.
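
To make that distinction concrete, here is a minimal sketch of the many-parameter constructor versus a builder. The class and fields are hypothetical, invented for illustration rather than taken from the team's repository.

    // A hypothetical CharacterSheet class, used only to illustrate the two styles.
    public class CharacterSheet {
        private final String name;
        private final int strength;
        private final int agility;
        private final int charisma;

        // "Dirty": several positional parameters, easy to mix up at the call site.
        public CharacterSheet(String name, int strength, int agility, int charisma) {
            this.name = name;
            this.strength = strength;
            this.agility = agility;
            this.charisma = charisma;
        }

        // "Clean": a builder that names each value and provides sensible defaults.
        public static class Builder {
            private String name = "Unnamed";
            private int strength = 1;
            private int agility = 1;
            private int charisma = 1;

            public Builder name(String value) { this.name = value; return this; }
            public Builder strength(int value) { this.strength = value; return this; }
            public Builder agility(int value) { this.agility = value; return this; }
            public Builder charisma(int value) { this.charisma = value; return this; }

            public CharacterSheet build() {
                return new CharacterSheet(name, strength, agility, charisma);
            }
        }

        public static void main(String[] args) {
            // Which argument is which? The reader has to go check the constructor.
            CharacterSheet quick = new CharacterSheet("Morgan", 3, 2, 4);

            // The builder makes the intent readable at the call site.
            CharacterSheet clean = new Builder()
                    .name("Morgan")
                    .strength(3)
                    .agility(2)
                    .charisma(4)
                    .build();

            System.out.println(quick.name + " and " + clean.name + " have the same stats.");
        }
    }

The positional constructor is the kind of "dirty" I had in mind for a throwaway prototype; the builder is the cleanup I would expect in code we intend to keep.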

2 comments:

  1. This post has made me wonder if there are pieces of this "Dirty Code" that you may find salvageable. On the flip-side, I am, perhaps morbidly, curious as to which pieces of the prototype you found to be the most disgusting. As "dirty" is still ambiguous, I might as well take this opportunity to learn from specific examples in the prototype.
    Jack McGinnis

    Replies
    1. The final prototype code, overall, is not too bad; this is certainly related to the fact that it ended up looking a lot like my PlayN prototype implementation. There were a few spots in the JavaFX implementation where I was able to point toward improvements in pull requests. I think the worst things that came through are the lack of access control modifiers in classes like IntroductionSplashView, the lack of meaningful abstraction around regions & locations in MapView (falls into the old "stick a '2' on the name instead of using an appropriate abstraction" code smell), and much of PlayerCreationView. These are just artifacts, though---I think the team also recognized a dirtiness to the process, as evidenced by the lack of coordination.
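
       To illustrate that last smell, here is a hypothetical sketch (not the actual MapView code): the "2" pattern, alongside the kind of abstraction I would prefer.

           // Hypothetical example of the "stick a '2' on the name" smell; not the real MapView.
           public class SmellExample {
               // Smelly: parallel fields distinguished only by a numeric suffix.
               private String regionName;
               private String regionName2;

               // Better: one abstraction that holds any number of regions.
               private java.util.List<String> regionNames = new java.util.ArrayList<>();
           }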
