
Friday, November 16, 2007

"Done" ness and "done grading" system

The concept of "Done" is not new to "Scrum"ilists. Scrum prescribes that a "Done" check list should be created by the team together with the Product owner. But still the Product owner has the final say in choosing if the iteration is "Done" or "not".

Many people argue against using the "Done" checklist because they consider it non-Agile, and because they feel the checklist increases the stress on the team!

How is it non-Agile?
It's because it goes against the Agile values and principles. Agile methods are based on the premise that project work is unpredictable, and they prescribe empirical process control: cycles of learning and adapting. Now, in the case of "done" in Scrum, the Product Owner (PO) goes through the "Done" checklist and sets an expectation for the team to achieve certain things by the end of the iteration. If the team delivers everything in the checklist, the iteration is termed a success; if not, it can just as easily be termed a "failed" iteration.

But don't you think enforcing "done" on the iteration amounts to forcing people to behave as if everything in a project were predictable?

So, can we abolish the "done" checklist and give the team a free hand to do whatever they want? How do we measure whether the team is really on the right track? How do we know whether the team is doing things that add value to the project? How do we measure the quality of the iteration?

Here is what Jeff Patton has to say about measuring the quality of an iteration.

Jeff Patton recommends a grading system for the features in an iteration:
  1. In a small group, brainstorm the major features of your product.
  2. Independently for each feature write your "grade" for the quality of the feature. Answer the following questions: Do you like the feature?; Do you like using it?; and Is it a valuable part of the product? Let your answers help you grade the feature with an A, B, C, or D, or fail it with an F.
  3. When done, discuss your grades with those in your group. Agree on a grade that best represents the group's opinion of the quality of that feature.
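
To make the mechanics concrete, here is a minimal sketch of collecting each person's grades and summarizing the spread before the group agrees on a final grade. This is my own illustration, not Patton's, and the feature names and grades are purely hypothetical:

  # Illustrative sketch only -- the features and grades below are made up.
  from collections import Counter

  # Each reviewer independently grades each feature (A-D, or F to fail it).
  individual_grades = {
      "search":   ["A", "B", "A"],
      "checkout": ["C", "D", "C"],
      "reports":  ["B", "B", "A"],
  }

  for feature, grades in individual_grades.items():
      # Show the spread of grades; the group then discusses and agrees on
      # a single grade that best represents its opinion of that feature.
      spread = Counter(grades)
      print(feature, dict(spread))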

After looking at the recommendations of "done" and "grading", I have been thinking of proposing a new model that offers the best of both worlds. I am going to call it the "done grading" system, since it sits midway between the two approaches above.

Here are the steps I propose:
1. At the beginning of each iteration, the team sits with the PO and creates the "done" checklist. This checklist is created to capture the PO's expectations for the iteration. I feel that having a "done" checklist sets the right expectations and gives the team a clear goal to work towards; without it, the team would still be working, but without a common goal.

2. At the end of each iteration, the PO still goes through the "done" checklist, but instead of calling the iteration "done" or "not done", he/she grades it based on the completion of the tasks.

For example, if the team has the following tasks in its "done" list:
  1. Delivering features A, B, and C
  2. Unit tests for all features
  3. Regression test cases and testing
  4. Automating the tests
  5. Introducing TDD
and the team has only partially achieved these goals, then the PO can choose a grade of "A", "B", "C", etc. based on his/her satisfaction (a rough sketch of how this could look follows below). Once the grading is done, the PO sits with the team during the iteration review session to do a root cause analysis of the poorly graded tasks. This session provides rich input to the team, and that data can be used to improve the grades in the upcoming iterations.
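
As a rough illustration of step 2 (the checklist items mirror the example above, but the thresholds are hypothetical, and the final grade remains the PO's judgment rather than a formula), the grading could be sketched like this:

  # Illustrative sketch only -- thresholds are made up; the PO's satisfaction,
  # not a formula, decides the actual grade.
  done_checklist = {
      "deliver features A, B, and C": True,
      "unit tests for all features": True,
      "regression test cases and testing": True,
      "automating the tests": False,
      "introducing TDD": False,
  }

  completed = sum(done_checklist.values())
  ratio = completed / len(done_checklist)

  # A rough mapping from completion ratio to a starting grade for the PO.
  if ratio >= 0.9:
      grade = "A"
  elif ratio >= 0.75:
      grade = "B"
  elif ratio >= 0.5:
      grade = "C"
  else:
      grade = "D"

  print("Iteration grade:", grade)  # 3 of 5 done -> "C" in this sketch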

By applying the "done grading" system described above, the team is relieved of stress at the end of each iteration, because the iteration is measured in the more collaborative "done grading" way rather than with the earlier binary "done"/"not done" verdict.
