“Eliminating delays between what you do gives you a better return than getting better at what you do.” - Alan Shalloway (Twitter, Dec 2, 2010).
I was recently introduced to the ball point game as a way to introduce agile concepts. A colleague of mine had decided to try it out with a larger group, so I volunteered one of our teams to 'test drive' both the game and its facilitation.
The game itself is pretty straightforward. If you want to read about it, check out Declan’s link above or one of the many other sites describing the game. One of the objectives of the game is to show how teams can dramatically improve their process just by stopping, reflecting, and re-planning.
This particular team improved from a velocity of 28 to 57 over four iterations (see the image). However, judged against results reported elsewhere for this game, that kind of improvement is OK but not great. The team focused their iteration planning efforts on perfecting their style rather than changing their process. They did discuss some alternative processes but ultimately rejected them, deciding together that their best course of action was to perfect their current method.
To make matters worse, as facilitators we were terrible project managers: we told them stories of great improvements by other teams ("I bet you can get to 150, I've seen other teams do it"), but all this did was frustrate them as they continued to try to perfect their process with only small gains. Even though they doubled their throughput in a short period of time, they were frustrated that they couldn't go faster and started to slow down at the end of iteration 4. They suggested that if a fifth iteration had been held, they would only have gone slower as the realization sank in that they would not reach 150.
The exercise up to this point did not achieve what we had hoped for. We had hoped they would find huge jumps in productivity and get more and more excited each iteration. Instead, they achieved only modest gains and grew more frustrated each iteration. So, what did we learn from this?
1. As the quote above says, it confirmed that focusing on improving and perfecting your current process is only going to give you modest gains. Practice doesn’t make perfect if your process is flawed. Continually practicing a bad process isn’t going to result in the benefits you are looking for and will likely slow your teams down over time as apathy builds. We need to give teams the lean tools and techniques to help them find their delays. One of the ‘tricks’ to this game is to reduce or eliminate your delays instead of perfecting your throws.
2. We need to give teams enough time not only to re-plan the next iteration, but also to reflect on and challenge both their process and their assumed constraints.
3. Teams who are pushed to achieve incredible productivity gains by well-meaning leaders under the guise of empowerment or encouragement may see short-term gains, but in the long term those teams will likely be negatively affected - especially if they haven't been given the tools and techniques to achieve those gains.
The team we tried this with did not at first come away with tools that they could use to improve their current project. In fact, we may have scarred them forever ;) We are trying this game again in a few weeks with a different group and we've made some changes with the hope of enabling the teams to see the 'aha' moment before the end of the game. I'll keep you posted.
Thursday, December 9, 2010
Wednesday, November 24, 2010
Agile Retrospectives - a Rising Patton Fusion
The last session of Agile Vancouver 2010 was a unique opportunity to watch Linda Rising conduct a conference retrospective with the Agile Vancouver organizers. It was interesting to watch how she facilitated and I wrote down some of her techniques so that I could try them out. The following day during the tutorials Jeff Patton led us through a mini retrospective with his own interesting twists based on his story mapping experience. What follows is the fusion of their ideas.
Note: This retrospective works nicely using index cards that can easily be sorted and grouped around a common table. If you are using walls and stickies, you can adapt where required.
Step 1 - Set the tone:
Recite Norm Kerth's Prime Directive (Linda was able to recite this from memory):
"Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand."
After reciting, ask each team member one by one if they agree to uphold this statement during the meeting and to avoid blame; a verbal "yes" indicates agreement. This verbal commitment is a simple influencing pattern that helps set the tone for the retrospective.
Step 2 - Framing the Retrospective:
The facilitator tells a story of a co-worker meeting you in the hallway of your company. The co-worker asks: "I know you were on project [X], how was that?". Each person responds with "It was great because...". Instead of speaking the answers out loud, give each person 3 index cards and have them write their answers down in silence. This allows input from each person regardless of personality type and keeps your team members' answers from influencing your own.
When each person has filled out their 3 cards, they read each one out loud and place them on the table. Once all cards are on the table, ask the group to silently group the answers together. Cards that are similar should be close to one another and cards that are different should be farther apart. According to Patton, the reason for doing this in silence is that it allows the work to happen quickly and without much discussion. In our exercise, we found this to be true.
Now that the good things are grouped together, take a different coloured index card and have the group summarize each grouping with a new card. For example, summarizing items like "team worked well together", "Bob collaborated to help me with my task", "Team rallied to complete the stories together" might be summarized with a card called "Great team work".
Step 3 - Do Differently:
Now that we have acknowledged the good things about the last period, remind the team that a project was only perfect if they would run that project or iteration again in exactly the same way. In reality, there is always something we would do differently. Ask the team to silently fill out 3 more index cards with what they would do differently. They may not write cards that assign blame, describe what went wrong, or try to problem-solve. In this part of the retrospective we are only identifying what we would do differently.
Once everyone has completed their cards, we again read them aloud as we place them on the table, group the cards in silence, and summarize with a different coloured index card. As the cards are read or summarized, the facilitator may need to remind the team to refrain from problem solving or directing blame.
Step 4 - Voting:
The next step is for the team to agree on which items on the "Do Differently" list are the most important. To keep the voting impartial and independent, have each team member write their top 3 items on an index card and hand the cards to the facilitator. The facilitator then tallies the votes and records the totals on the summarized cards. An alternative is to use dot voting, but I've found that dot voting can be 'gamed' too easily and that initial dots influence those who vote later (groupthink).
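As a toy illustration only (the card names below are invented, not from the session), the facilitator's tally in this step amounts to counting how often each summary card appears across the ballots:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative tally for the voting step: each ballot lists one member's
// top 3 summary cards; the facilitator counts appearances per card.
public static class RetroVotes
{
    public static Dictionary<string, int> Tally(IEnumerable<string[]> ballots)
    {
        return ballots
            .SelectMany(ballot => ballot)      // flatten all cards named on all ballots
            .GroupBy(card => card)             // group identical card names
            .ToDictionary(group => group.Key, group => group.Count());
    }
}
```

With three ballots that each name "Earlier testing", that card would get 3 votes recorded on it, and the team discusses the top-voted cards in the next step.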
Step 5 - Experiments:
Using the top 1 or 2 voted items, ask the team to split into groups to discuss them - one group per item. Instruct the groups to discuss small experiments or tweaks that the team could try in the next iteration. If you practice frequent retrospectives, this discussion does not need to be about problem solving or even root cause analysis. The goal is to quickly agree on small experiments to try to resolve the issue you are discussing. For those dedicated to root cause analysis this may be a problem, but give it a try and do what works for your team while avoiding blame or delving deep into problem solving. This part should be time-boxed to 10 or 15 minutes maximum.
After deciding on the experiments, have one person from each group present the idea to the group. This idea needs to be included in the backlog for the iteration and the team (not an individual) commits to completing the experiment during the iteration and examining the results.
Step 0 - Review Experiments:
At the beginning of the next retrospective, review your experiments to see how effective they were, and use that information as input into future experiments.
Other notes:
If you are doing a retrospective for a larger period of time, then consider starting the retrospective by building a timeline of events. For more info, check out this blog: http://www.thekua.com/rant/2006/03/a-retrospective-timeline/
Thanks Linda and Jeff for sharing your methods and ideas.
Friday, November 5, 2010
What is your top 1 agile tip? @AgileVancouver
The Agile Vancouver conference wrapped up yesterday - a great Canadian conference if you are wondering where to spend your 2011 training budget. On Wednesday morning we held an open space similar to the agile panel at SDEC. We opened the floor for questions, ranked them, and then spent 10 minutes on each topic. Since the open space was largely filled with speakers and experienced agilists, I asked this question: "What is your top 1 agile tip?" Here are our responses, with Twitter usernames where applicable:
@lucisferre - "Working towards continuous delivery"
@dbelcham - "Be agile w/ agile practices. Adopt what works"
@mikeeedwards - "One step at a time. Find small wins"
unknown - "Adopt pair programming"
Angel from Spain - "Make the change come from them - get them to see the problem and come up with the improvement"
@Ang3lFir3 - "Can't do it without the right people. One bad egg spoils the whole bunch. Get the right people on the bus"
@dwhelan - "Find the bottleneck in your value flow and cut it in half"
@srogalsky - "Uncover better ways. Never stop learning. You are never finished being agile"
@mfeathers - "Don't forget about the code or it will bury you. It will $%#ing bury you"
@robertreppel - "Recognize your knowledge gaps and bring in help if you need it"
@jediwhale - "Pull the caps lock key off your keyboard"
Next time I'm in a panel, the question will be: "I love agile because..." Feel free to comment with your answers.
Why is collective team ownership and commitment better than individual ownership and commitment?
Recently I've been pondering collective vs. individual ownership and commitment, the theory behind it, and how to respond to someone who may not have considered why collective ownership and commitment is important. If you are on a team that is assigning responsibility to individuals, you could respond in several ways. My own impulse might be to respond with frustration, or to smile, nod, and wink at my more agile-aligned team members. However, I have never found these types of responses to be very productive ;). You could also inform the team that the agile community is full of luminaries who tell us that individual responsibility is not compatible with good results over the long term. As you can imagine, though, it also won't be effective just to tell your team that Johanna, Brian, Bob, Esther, James, Jeff, Mary, (etc.) and you don't think this is an effective way to manage the work. Instead, I suggest you attempt a face-to-face discussion on the pros and cons of assigning work to the team vs. the individual. Here are a few things you might use in that discussion.
Team ownership reduces the risk of having or creating one 'smart person in the room' (i.e. a bottleneck) who does all the work. Even though Jim may be the best person to complete the job, if Tim and Jane work on it together with him the task will take a little longer initially, but that cost is repaid over the long term as the whole team becomes better at accomplishing each task and filling each role. While a cross-functional team isn't always easily or immediately created, eventually that team can complete any task effectively even if one or more team members are missing.
Collective ownership should result in fewer items in progress. Work-in-progress tasks have zero realized value toward the organization's goals. If we commit to and own items as a team, we should work hard to get them done one at a time so that we can realize their value sooner. Rather than 5 people completing 5 tasks individually that all finish together at the end of the month, task 1 is finished and creating value at the end of week 1, task 2 at the end of week 2, and so on.
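The arithmetic behind this can be sketched with idealized numbers (an illustration, not data from the post): if a task finished at the end of week i delivers value for the remaining weeks of an n-week window, finishing one task per week realizes value inside the window that the all-at-once approach never does.

```csharp
// Toy cost-of-delay arithmetic for the example above.
// Assumes n tasks, an n-week window, one task finished per week;
// a task finished at end of week i delivers value for (n - i) weeks.
public static class FlowMath
{
    public static int RealizedTaskWeeks(int tasks)
    {
        int total = 0;
        for (int week = 1; week <= tasks; week++)
            total += tasks - week;   // weeks of value after each finish
        return total;
    }
}
```

RealizedTaskWeeks(5) is 10: ten task-weeks of value realized inside the month, versus zero when all five tasks land together at the end.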
Quality has a better chance of being built in from the beginning when a team owns and takes responsibility for a task together. As the team works together on a backlog item, we will collectively discover our 'quality blind spots' earlier in the process and adjust accordingly. When I work on an item myself and then present it to the team for review after I'm finished, more re-work is required to incorporate the ideas of my team members before the item can be marked done. It is critical to get feedback early and often in order to 'fail fast' and improve quality. Teamwork is an effective way to accomplish this.
Finally, collective team ownership promotes... teamwork. Individual accountability and responsibility tend to generate selfish behaviour (I'm working on my task that I'll be measured on so I can't help you with yours). Team accountability and responsibility builds a stronger team because if any one task is sub par it reflects on the whole team and not on an individual.
Of course, this doesn't work very well if we don't sit together - which is why we do.
Monday, October 25, 2010
My top 12 agile podcast episodes
When I mentioned at SDEC10 that much of what I know about agile I learned from podcasts, several people expressed interest in my favourites. Here are my top 12 (now 13) podcast episodes in random order:
1. Hanselminutes 119. What is Done? A conversation with Scrum co-creator Ken Schwaber. An excellent conversation that answered some of my initial questions about agile, like: how do logging, security, and other infrastructure tasks fit in the backlog, and, of course, what is done?
2. AgileToolkit - Alistair Cockburn interview at Agile2006. Alistair talks about his evolution as a methodologist from a hardware guy, the Crystal family of agile methodologies, his writing, and much more. Crystal recognizes that one agile methodology cannot be used in all companies and tries to identify the core principles and practices that are important for all agile projects.
3. Hanselminutes 145. An overview of the SOLID principles with Robert C. Martin ("Uncle Bob"). Excellent overview with examples.
4. LeanAgileTalk 20070118. Part 1 of a great conversation with Alan Shalloway on how to apply lean principles to agile development. A good start on how to implement practices - the 'how'. Some excellent sound bites in here.
5. AgileToolkit - Uncle Bob interview at Agile2005. A brief discussion with Bob Martin on the essential principles of Agile.
6. Hanselminutes 23. A short introduction to scrum.
7. Hanselminutes 31. A good introduction on Test Driven Development and the benefits. Includes discussion of the pros/cons.
8. AgileToolkit - APLN Panel discussion. Long (2 hours), but good. Here are some of the highlights:
- Worst agile transition failures? What is required for transition? Leadership; don't do partial agile; integrate testers on the team; form teams that work effectively together.
- Self-organizing teams - is this possible? The original XP team was full of architects, but our teams may not be - how can we do this? The original backlash against architecture was that architects were responsible to standards, not to the business problem. It is also important to move our experts out of the corner office, off the pedestal, and onto the teams as active members producing code.
- Level of up-front architecture required? Depends on the problem/project. With a new domain or a junior team, more architecture guidance is required. With a known domain and an experienced team, less is required.
- Is "requirements" a dirty word? No - "signoff" is the dirty word. Requirements are good, but waiting months before implementing is not, and refusing to accept change is not. Don't penalize people for finding errors, omissions, or changes at any step.
- How to do fixed-price projects? One suggestion: share the risk (i.e. cost) for the first 6-8 weeks (2 or 3 iterations) to measure your velocity and gain trust. Then, if the client is happy, you have enough info to fix the price, and you get your "risk" back.
- Why is fixed bad? You give them what they asked for, rather than what they need. Never-used + rarely-used features = 64% of the code (Standish report).
9. AgileToolkit - Uncle Bob explains the agile manifesto. Robert "Uncle Bob" Martin answers the question "What is Agile?" He goes back to the start: the Snowbird meeting, the formation of the Agile Alliance, and the drafting of the Agile Manifesto. He also looks at the core principles and key practices of Agile software development.
10. Agile Toolkit - Poppendiecks at Agile 2006. Tom and Mary discuss several topics including:
- optimize the whole, not the pieces
- don't neglect one of the pillars of lean: respect people
- queueing theory. Example: a defect list is a queue that should not exist. We should be mistake-free after each step. Don't build on top of bad software. This also helps eliminate interim artifacts like a 'test strategy document'.
- testing phase - this is testing too late
- relating lean to cooking by a master chef
- describing how clothing store Zara implemented lean (very interesting)
11. IT Conversations - Ken Schwaber. Ken talks through many of the reasons why agile software development is such a necessary change in the industry.
12. Hanselminutes 169 - TDD with Roy Osherove. Roy Osherove educates Scott on best practices in Unit Testing techniques and the Art of Unit Testing.
13. DotNetRocks show 750 - While at Prairie Dev Con in Calgary, Carl and Richard chatted with Steve Rogalsky about User Story Mapping. Steve explains how User Story Mapping helps you visualize your backlog beyond a serial list of features, allowing you to improve your project decisions, priorities, plans, and delivery. (Sorry - had to add this one ;)
Hope you enjoy them. Let me know if you have other favourites - I'd love to listen to them.
Saturday, September 18, 2010
FitNesse and today's date with .NET
I have an acceptance test that says I need to validate the age of majority in each of the different states and provinces. Here is a simple example:
Given Mary who is born January 5, 1995 and lives in Manitoba
When she asks if she is the age of majority
Then return no
The test above is fairly simple and I could write it like this in the wiki as a Column Fixture (using the fitSharp.dll to test C# code):
!|Check Age of Majority|
|Province State|Birth Date|Am I Underage?|
|MB |5-Jan-1995|Yes |
The problem of course is that this test will start failing on January 5, 2013 when Mary turns 18. Also, it does not perform the boundary testing that I would like it to do in order to test someone who is 18 today vs. someone who will turn 18 tomorrow. In order to improve this test, I investigated some other date functions in FitNesse and a plugin by James Carr that allowed you to add days to the current date. These work ok for smaller calculations like "Given document ABC, When it is 30 days old, Then archive it". However, this would be a little more cumbersome for birth dates when adding 18 years (esp. with leap year calculations) and the !today function in FitNesse does not work in ColumnFixture wiki tables. So, I found a simple way to meet my requirement.
First, I wrote a class in C# that accepts two parameters to Add or Subtract Years and Days to the current date. The class uses C#'s simple DateTime addition to add or subtract the years/days from today and returns the result. You could easily extend this to add months or add other functionality required in your tests:
using System;
using fit; // ColumnFixture is provided by the fitSharp assembly

namespace FitNesseTutorial.Tests
{
    public class GetDateBasedOnToday : ColumnFixture
    {
        public int AddYears;
        public int AddDays;

        public DateTime ResultingDate()
        {
            return DateTime.Today.AddYears(AddYears).AddDays(AddDays);
        }
    }
}
Then in FitNesse, at the top of my script for this story, I call GetDateBasedOnToday and store the resulting values in FitNesse variables. I then use the variable names throughout the script to reference the underage and of-age birth dates:
''Get underage and of age dates for 18 and 19 year olds''
!|Get Date Based On Today |
|Add Years|Add Days|Resulting Date?|
|-18 |1 |>>UNDERAGE_18 |
|-19 |1 |>>UNDERAGE_19 |
|-18 |0 |>>OFAGE_18 |
|-19 |0 |>>OFAGE_19 |
!|Check Age of Majority|
|Province State|Birth Date |Am I Underage?|
|MB |<<OFAGE_18 |Yes |
|MB |<<UNDERAGE_18|No |
|BC |<<OFAGE_19 |Yes |
|BC |<<UNDERAGE_19|No |
In FitNesse, the final result including the acceptance criteria above looks like this:
(Note: The example above should probably be written as a unit test because it is fairly straightforward, but it simply illustrates how to use the date logic that I'm using as part of larger acceptance tests.)
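The same boundary-date arithmetic can be sketched outside FitNesse as well. Here is a minimal Python equivalent of the fixture's AddYears/AddDays logic; the helper names are my own, and the February 29 clamping mirrors what C#'s DateTime.AddYears does:

```python
from datetime import date, timedelta

def add_years(d, years):
    """Shift a date by whole years, clamping Feb 29 to Feb 28 in
    non-leap years (the same behaviour as C#'s DateTime.AddYears)."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:  # Feb 29 in a non-leap target year
        return d.replace(year=d.year + years, day=28)

def boundary_birth_dates(age_of_majority, today=None):
    """Return (of_age, underage) birth dates: someone born on the first
    date turns age_of_majority exactly today; someone born on the
    second turns it tomorrow."""
    today = today or date.today()
    of_age = add_years(today, -age_of_majority)
    underage = of_age + timedelta(days=1)
    return of_age, underage
```

For example, with today fixed at January 5, 2013, `boundary_birth_dates(18)` yields Mary's birth date of January 5, 1995 as the of-age boundary.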
Sunday, August 15, 2010
Agile readiness assessments at Agile2010
I've posted the images from the session "Look before you leap - Agile readiness assessments done right" at Agile2010. The images are here http://tinyurl.com/26bwlzw.
Some of the images are blurry in my browser (IE), but if you 'Save As' to your computer then you get more detail (not sure why). If there are any images you need more detail on, let me know - I still have all the originals.
Saturday, August 14, 2010
Agile2010 - My Conference Summary
My user stories for the conference were:
- Learn more about coaching and agile assessments (done)
- Get ideas for promoting and increasing agile adoption (done)
- Find better ways to use FitNesse and Selenium (done)
- Acquire some new tools for expressing user needs (done)
- Meet many of the people I’ve been following over the years (done)
- Meet some of the AA-FTT folks (done)
In addition, I encountered:
- a passionate community that is willing to share their time, skills and ideas with anyone who asks
- a troubled group of leaders that is legitimately concerned about the lack of technical content at the conference
- leaders advocating for agile in all its flavours instead of promoting any specific one (hurray!)
- people passionate about taking what they have learned in agile and using it to improve communities outside of the business world
- a few technical folks who undervalue mastery of the 'soft' skills relative to the technical skills
- new friends that I hope to see again next year
- a dedicated and friendly volunteer staff that helped make the conference run smoothly
- a community willing to volunteer their time and money towards a cause (mano-a-mano)
Finally, I encountered a growing and dedicated community that has a lot of success stories to share, some challenges in the road ahead, and hopefully a commitment to continue the fight together.
Thanks all.
Day 5 at Agile2010
The final day of the conference contained three general sessions. I found a few people who skipped out on these sessions - too bad for them as the sessions were a great wrap up for the entire week.
Dave West talked about Product-Centric Development and the move away from the separation of business and IT (yes please!). He asked us to start measuring ourselves and our teams by how much value we deliver and not by on-time, on-budget, # of defects, # of stories, lines of code, etc. We can’t make our teams act as part of the business unless we change our measurements.
Ron Jeffries and Chet Hendrickson provided both comic relief and poignant commentary. I think Chet's comment sums up their talk: "there’s lots of ideas out there and we need to look at every damn one of them”.
Finally, Mike Cohn's talk was a great ending to the conference as he challenged us with some practical ideas of how to spread what we’ve learned using the ADAPT model. Create Awareness of the problem by communicating using metrics and stories. Focus on one or two reasons to change. Increase the Desire to change by communicating that there is a better way. Get the team to take agile for a test drive and focus on addressing any fears. Develop the Ability to work in an agile manner by providing coaching and training. Promote agile by publicizing success stories or holding agile safaris where people can drop in to agile teams for a short time to see how it works. Transfer agile to all non-development teams, departments, divisions, etc. Align promotions, raises, HR, and Marketing. Finally, don’t expect an agile transition to happen all at once. Create an improvement backlog and improvement communities, and work on a few stories that are important to your community before tackling the next ones.
A final call to action from Mike: “Now we’ve upped our skills, up yours!” Well said.
Day 4 at Agile2010
Two of today's sessions were more about acquiring ammunition and ideas for my own future talks than about acquiring new skills. In Confessions of a Flow Junkie, Dave Rooney introduced me to the coin flipping game which contrasts the flow in Agile vs Waterfall. In the final session of the day, James Shore and Arlo Belshee made us laugh and cry with their Bloody Stupid Johnson routine. The highlight of the session is the soon-to-be-framed certificate that I attained as an "Agile Software Specialist" or A.S.S.
The session with Gerry Kirk and Michael Sahota was designed to create a knowledge base of methods and tools for doing agile readiness assessments. It was great to see ideas from other coaches and I look forward to the compiled results.
The conference party at Epcot was also a lot of fun and I enjoyed some non-agile time with my agile friends.
Thursday, August 12, 2010
Day 3 at Agile2010
Here is a summary of another satisfying day:
I was introduced to some powerful team wireframing techniques that could be incorporated into the discovery phase of a project as another tool to aid in creating and defining the backlog. I wonder what my UX friends would think of doing UX as a whole team. It definitely fits the agile model where every team member provides input and takes responsibility for the whole process, not just their specialty.
The 11am session reminded me of the Trident Splash commercials - I wasn't prepared for the amount of information I received from Jeff Patton and I'll have to review my notes and his slides several times. What I do know is that there was a lot of valuable information on how to do agile discovery. The techniques were designed to help you get to a definition of ready - ready to start sprinting. Some quotes I wrote down: "Our job is to minimize output, and maximize outcome"; "Most agile practices emphasize delivery, and not much discovery", and as an example of that second quote: "velocity is a measurement of output, not outcome". A good reminder to focus on why the project was started in the first place. Update 8/31: The slides are now available here
In the first session after lunch, I joined the presenter up front as a 'timer' object. Brian Marick was explaining OO programming practices to non-programmers by using volunteers who acted as objects. He explained SRP, encapsulation, MVC, etc. You can see a partial video of a previous talk he did on the topic at http://vimeo.com/13506935
Not shockingly, in the last session I learned I was using Selenium in the most basic (and mostly abhorred) way. I was able to spend some time with Patrick Wilson-Welsh after the session in a mini open jam to go through some coding examples of how to do it the 'right' way. Good thing I haven't given my Selenium SDEC presentation yet. Thanks Patrick for spending the extra time - you are one of many at the conference that have shared your time and energy with me and others.
The day ended when I didn't win an iPad, but I did thoroughly enjoy playing beach volleyball in the dark until almost midnight with occasional swim breaks. Thanks to everyone who participated - it was a great way to give my brain a rest.
We're at Epcot tomorrow night until midnight so I don't think I'll be posting my Day 4 notes until Friday sometime.
Tuesday, August 10, 2010
Day 2 at Agile2010
Today was another satisfying day at the conference in 'sunny' (I did see the sun this morning) Orlando. I started the day off with 11 other agilists going for a morning run on the paths and walkways around the Disney complex followed by a great breakfast. The catering at the Dolphin has been fantastic so far.
The first of two notable sessions that I attended was called "Effective Questions for an Agile Coach." The two presenters Arto and Sami from Reaktor were great and cleverly crafted the table discussions so that we would fall into common coaching traps and then helped demonstrate better alternatives. Here is a brief overview of some of the ideas from the presentation:
- By giving advice you are creating motivation from the outside. You need to ask effective questions to let them figure it out for themselves
- The four acceptance tests for good coaching questions are that they a) lead to exploration, b) aim at descriptive answers, c) avoid judgement and d) avoid unproductive states of mind
- Avoid the question 'why'. Try converting the question into a What, When, How Much or How Many. For example, instead of Why, ask "What benefit did you expect to receive?"
- When trying to help the team solve a problem, follow the GROW model. Goal: first, find their goal. Reality: second, ask questions to help them describe the current state. Options: third, ask questions to find at least 3 options - simply ask how they would solve it. What: finally, ask questions to find agreement on a path forward.
Some other random thoughts:
- I attended a www.innovationgames.com seminar. After playing The Product Tree game and Scream, I want to explore how to introduce these games in my own projects.
- Dave Thomas was funny and poignant as the keynote speaker although I did wonder if his talk was targeted more at those not at the conference than those of us who have already 'taken the pill'. They videotaped his talk and I suggest looking for it in the next few weeks.
- I took some advice from other conference veterans and walked out of a session that was covering topics I was already familiar with. As a result, I had some great conversations about retrospectives and the intersection of agile and church.
- The 'soft skill' sessions at this conference have been great, but I wonder if there is room for more advanced developer topics.
- Played some beach volleyball at the end of the day with 2 other Canadians and a Swede. Are there any Americans at this conference?
More tomorrow... looking forward to the open jam on ATDD/BDD wording.
Monday, August 9, 2010
Some thoughts from day 1 of Agile2010
I started the day with Mary Poppendieck's talk "Leader's Workshop: Making Change Happen and Making it Stick" which was based on the book Switch: How to Change Things When Change Is Hard. The talk was split into 3 parts. First you need to Motivate the Elephant, then you need to Direct the Rider, and finally you need to Shape the Path.
Mary suggested that in order to Motivate people, you need to treat them like volunteers. You need to treat them like they could leave at any time. A quote from Peter Drucker: "They need, above all, challenge. They need to know the organization's mission; believe in it, they need to see the results". As our table discussed this concept, we were able to easily relate to our own stories of leading youth at church or in Boy Scouts. I think this would be a great way to be treated and I can see how it would translate into energized and passionate employees. A volunteer team has to be engaged or they will disappear.
The purpose of Directing the Rider is to provide clear direction. One of the ways to do this when change is difficult is to find the bright spot. When you are having trouble implementing a change, look for some small success and then duplicate it. She gave a great example about post-it notes. When 3M first made the post-it notes, they could not sell them. They test marketed them in several locations and they only sold in one ("the bright spot"). It turns out that the sales rep in Richmond, Virginia decided to give them away and once he did everyone wanted one. 3M then followed this model in other locations and now post-it notes are a household item (and a valuable agile tool!). So, to direct the rider in difficult situations, find instances of success and clone them. Books she references include Positive Deviance and Influencer: The Power to Change Anything.
Finally, she suggests Shaping the Path by looking at the long term and allowing local decision making. You also need to find ways to make the desired change the path of least resistance. "Change will only stick when the path of least resistance is the path of change." IBM's move towards agile was used as an example. Instead of forcing agile, they allowed it to succeed in smaller teams and then sold and promoted those successes. Soon, everyone wanted to do it.
In summary, to encourage change in a team when it is difficult a) treat your team as volunteers b) find the bright spot and clone it and finally c) make the desired change the path of least resistance.
In the afternoon I went to Hacker Chick and Dawn Cannan's hands-on presentation "Better Story Testing through Programmer-Tester Pairing". We had fun doing developer/tester pairing of acceptance tests in FitNesse and Java. I learned a few new FitNesse tricks and also that I haven't lost all my dev skills. I played the dev role and our team was the first to complete the assigned task, beating some notable names in the room <cough>Brian Marick</cough>. The session also reinforced ATDD and gave me some ideas I'd like to incorporate into a future presentation.
Also, I missed Janet Gregory's talk this morning on the Dance of QA in agile, but I managed to talk to her this evening at the mixer. She gave a quick summary of her talk and how she related it to dance. Agile team members need to be like contestants on "So You Think You Can Dance". On the show, hip-hop dancers learn other dance styles like ballet and vice versa. Similarly, agile team members (including QA) need to improve their skills in all of the disciplines in a project team. An interesting thought.
Thanks to everyone - a great first day.
In other news, Bob Payne is now following me. #Stalker.
Monday, July 5, 2010
Using Selenium for Meta tag testing
I recently tweeted that I had figured out how to use Selenium to test for meta tag content. Here is an example using http://www.protegra.com/. On the Protegra home page, I wanted to make sure the following 2 tags existed:
<meta name="title" content="Protegra.com" />
<meta name="description" content="Business. Technology. Solutions." />
Obviously, meta tags don't show on the page at all, but do exist in the HTML. To test this manually, I would need to open the page in a browser, click view source, search for 'meta' and then manually compare the results to the expected meta tags. To test this using Selenium, all you need to do is create a test case that opens up Protegra.com and then uses VerifyElementPresent to search for each meta tag. The VerifyElementPresent command allows you to enter the name of the meta tag and the expected content in the format of:
//meta[@name='name of the meta tag' and @content='the expected content']
The Selenium test case html for the two meta tags I'm looking for looks like this:
<tr>
<td>verifyElementPresent</td>
<td>//meta[@name='title' and @content='Protegra.com']</td>
<td></td>
</tr>
<tr>
<td>verifyElementPresent</td>
<td>//meta[@name='description' and @content='Business. Technology. Solutions.']</td>
<td></td>
</tr>
Now I can run this test repeatedly to test for the expected meta tag content on the home page whenever I want without using manual effort. I can also create similar tests for each page throughout the site and run them all consecutively. Additionally, I could use Excel to generate the Selenium test case html for all my pages and meta tags without having to write the html manually, or I could export this test case into C# (or any of the other programming languages supported by Selenium) and transform this test to look up pages and meta tag values in a database table. Fast and easy.
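The generation idea in the last sentence can be sketched in a few lines of any scripting language. Here is a minimal Python version; the helper name and the hard-coded tag list are my own for illustration, not part of Selenium:

```python
def meta_assertion_row(name, content):
    """Build one Selenese table row that asserts a <meta> tag with the
    given name and content exists on the current page."""
    xpath = f"//meta[@name='{name}' and @content='{content}']"
    return (
        "<tr>\n"
        "<td>verifyElementPresent</td>\n"
        f"<td>{xpath}</td>\n"
        "<td></td>\n"
        "</tr>"
    )

# The pairs below could just as easily come from Excel or a database table.
tags = [
    ("title", "Protegra.com"),
    ("description", "Business. Technology. Solutions."),
]
rows = "\n".join(meta_assertion_row(name, content) for name, content in tags)
```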
Friday, July 2, 2010
But does it work? - an agile metric
I've said publicly at conferences and other gatherings that my passion for agile and lean began years ago after a particularly troubling project that tried to be agile. While that project had a strong team and eventually delivered a product, it had trouble with quality, scope and budget. In retrospect the biggest problem was that we had little knowledge of what it meant to be agile - our process was flawed. As a leader of that team, I took responsibility for the result and began a search to understand agile. Borrowing a phrase from the agile manifesto, I wanted to 'uncover better ways'.
After implementing several changes to our process, my projects over the years seem to have improved significantly. But how do you measure this? While no metric should stand alone, here is one quality metric that I'm experimenting with:
((# of high defects * 5) + (# of medium defects * 3) + (# of low defects * 1)) / total project hours * 100
The 'troubled' project had a score of 18.7. My most recent project scored 1.2, roughly a fifteen-fold improvement in quality. I think I'll keep doing this agile thing.
P.S. I'm heading to Agile2010 this summer. Give me a shout if you are going and we can find ways to de-brief together over lunch or dinner in Orlando.
Tuesday, June 8, 2010
User Stories in more detail
Several people at the conference last week asked me for more information about how to write user stories. I could write a long blog with lots of examples and explanation, but someone has already done that. Check out Scott Ambler's post here.
While Scott recommends capturing stories on cards, I like to capture them initially in a spreadsheet because it makes it easier to organize and prioritize. Once the project starts I print out the cards - instructions are in one of my earlier blogs.
Monday, June 7, 2010
The last shall be first
At what point in the project do you start your testing effort? In many project plans that I have seen, QA and/or UAT is added to the end of the project. The development team hands over the code to the testing team who then writes and executes the scripts. The bugs are passed to the developer and the test & fix cycle begins. What fun.
Here is an alternative to the test & fix cycle:
1. Write test scripts before development starts on any particular feature. Test scripts help confirm the requirements with the client and allow developers to have a full understanding of how the code must work. The test scripts function as executable requirements and answer "How will I know when I'm done?"
2. Minimize the time between when a feature has been developed and when it is tested. This allows you to find defects or requirements misunderstandings early so that developers do not re-build the same defects or misunderstandings into future features. This helps increase the quality of the system while decreasing the time spent in the test-fix-test-fix cycle. Complete testing of any feature should occur in the same iteration that the code is written and should be part of your 'done' criteria.
Both of these two simple steps are based on the Lean Principles of Eliminate Waste and Build Quality In. For more information, check out this article from NetObjectives.
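To make "executable requirements" concrete, here is a tiny sketch of a test written before the feature exists. The discount rule is a hypothetical example, not taken from the post; the point is that the assertions are the requirement.

```python
# A tiny "executable requirement" written before the feature is built.
# The apply_discount rule below is a hypothetical example.
def apply_discount(subtotal, is_member):
    """Members get 10% off orders of $100 or more; everyone else pays full price."""
    if is_member and subtotal >= 100:
        return round(subtotal * 0.90, 2)
    return subtotal

# These assertions answer "How will I know when I'm done?"
assert apply_discount(100.00, is_member=True) == 90.00
assert apply_discount(99.99, is_member=True) == 99.99
assert apply_discount(200.00, is_member=False) == 200.00
```

Until the code makes these assertions pass, the feature isn't done; once they pass, they become the regression suite.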
Tuesday, June 1, 2010
PrairieDevCon Presentation links
Here are some additional links for my presentations at PrairieDevCon this week.
Introduction to Lean and Agile:
- Books
- Lean Software Development. Tom and Mary Poppendieck (2003)
- User Stories Applied. Mike Cohn (2004)
- The Art of Agile Development. James Shore and Shane Warden (2008)
- The Art of Lean Software Development. Curt Hibbs, Steve Jewett, Mike Sullivan (2009)
- Agile Estimating and Planning. Mike Cohn (2005)
- Sites
- Podcast Sites
- Newsgroups
- Other links to articles, blogs, videos
- www.martinfowler.com/articles/newMethodology.html
- http://agileinaflash.blogspot.com/2009/08/12-principles-for-agile-software.html
- www.infoq.com/Agile2009 (Videos from Agile 2009)
- http://www.netobjectives.com/files/BusinessCaseForAgility.pdf (requires account registration)
- http://groups.google.com/group/agile-developer-skills/web/draft-summary-of-chicago-meeting?hl=en&pli=1
- http://www.agilemanifesto.org/history.html - History of the Manifesto
- http://www.devx.com/architect/Article/32836/0/page/4 - comparison of seven popular agile methodologies
Planning Poker:
- Links
- www.planningpoker.com/detail.html
- http://www.youtube.com/watch?v=fb9Rzyi8b90&feature=PlayList&p=3F5BBA263D7DF99C&playnext=1&playnext_from=PL&index=2
- Video of Mike Cohn explaining Planning Poker
Friday, April 30, 2010
List of Agile practices
I read an interesting list of agile practices at Naresh Jain's blog: Explosion of Agile Practices. I think there is a lot of overlap in his list, but it made me re-visit what I think the core list of lean and agile practices should be. My list is below in no particular order. Each item can be expanded - for example, Technical Excellence implies TDD, simple design, following SOLID principles, etc.
1. Daily Stand-ups
2. Visual Project Management
3. Customer Accessibility
4. Technical Excellence
5. Frequent Delivery
6. Frequent Retrospectives
7. Continuous Improvement
8. Continuous Integration
9. Co-located teams
10. Iteration Planning
11. Team Estimating
12. Acceptance Tests (and automation of those tests)
13. User Stories
14. Self Organized Teams
15. Iteration Demos
Update 8/15/2010:
- And one more... Deliver Value!
Saturday, March 27, 2010
Agile scope completion techniques
One of the questions I've received in the past about agile techniques is how to ensure you've captured enough detail about your requirements in order to proceed without missing major scope elements.
Whether you are using story cards, features or other techniques to capture your requirements, you need to answer this question: "How do I know when I've done enough requirements gathering?" In waterfall this is ‘easy’ – gather all the detail and sign off (ok – I’m simplifying). In agile, we depend on features or stories, but many are concerned that major scope elements will be left out, which will either cause many items to grow dramatically in size or reveal that feature X is really features X, Y and Z. For example, when the registration screen has 50 fields instead of the 10-15 that we might have assumed, but didn’t write down. It is hard to understand how this can be done in 1 or 2 days using feature or story cards that contain only one line of description, a few lines of acceptance criteria and a few assumptions.
Three things for you to consider to help you solve this dilemma:
1. In waterfall techniques, although we hold some comfort in our massive requirements documents, we know from experience that even then things will change and things will be missed.
2. My teams estimate using planning poker with the full team including the client, and we have found this has helped to uncover hidden or unknown scope. We discuss each item together before estimating and talk about the number of screens, inputs, outputs, services, etc. involved. This discussion itself often uncovers additional scope, but so does the estimating that follows each discussion. For example, when most of us say ‘2’ and one person says ‘8’, the person who said ‘8’ enlightens the team on the complex caching required to meet the performance requirements listed as an acceptance test. This is especially important if your client is the one with the highest estimate. Don't ignore it.
3. Lastly, I attended a virtual class on agile estimating that suggested another technique. For every feature or story, categorize the requirements certainty as high, medium or low. Keep challenging your client until the requirements certainty on each story is 'high'.
I'd be interested in other techniques you may be using to keep the initial requirements gathering phase light weight, yet complete. I think as an industry we are getting better at embracing the changes that are inevitable on all projects, but our clients still require us to have a good understanding of the known scope and the resulting estimate before starting the project.
Wednesday, February 24, 2010
Planning Poker and Buckets of Hockey Pucks
The teams I've been working with over the past while have been using planning poker for project estimating. Despite initial and fleeting skepticism by a few when we bring out the cards, as a whole our teams and our sponsors are finding value in this approach. I was reminded today that we should look at poker points as buckets of sand. That is, when deciding between a 5 and an 8, if something is a 6, you can probably still put it into a size 5 bucket if you think of the points as sand that can be heaped at the top of the bucket. Also, a 7 would overflow a size 5 bucket but would fit easily into a size 8.
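The bucket metaphor can be sketched as a simple rule: an item slightly bigger than a bucket still heaps into it, while anything more overflows to the next size. This is a minimal illustration; the 20% heaping tolerance is my own choice for the sketch, not a standard rule.

```python
# Snap a raw estimate to a planning-poker bucket, letting slightly oversized
# items "heap" into the smaller bucket. The 20% tolerance is illustrative.
BUCKETS = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def to_bucket(estimate, tolerance=0.20):
    for size in BUCKETS:
        if estimate <= size * (1 + tolerance):
            return size
    return BUCKETS[-1]

print(to_bucket(6))  # heaps into the 5 bucket
print(to_bucket(7))  # overflows into the 8 bucket
```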
In light of Canada's 7-3 victory over Russia in Olympic quarter finals, I've decided to change the metaphor to buckets of hockey pucks instead of sand. Go Canada!
P.S. I'll be presenting on Planning Poker in Regina in June at http://www.prairiedevcon.com/
Wednesday, February 17, 2010
FitNesse gets the gold!
I've been using FitNesse for just over 3 weeks and I am pleased with the value it is adding to our project even though I'm only using the basic features at this time. As a former developer I'm finding it fun to use because I have to write a little bit of code in order to create an interface between FitNesse and each service that I'm testing. Most of our testing on the project was going well and we were avoiding major errors - until last Thursday...
A change in the code from a newly completed work item resulted in 81 of our 191 tests failing. Imagine how long it would take to re-run all 191 tests manually to find out that 81 had failed. Imagine how long it would take to re-test all 191 tests manually to make sure they were fixed. As you can see from the image below, it took us 39 minutes to find, fix and re-test all 191 tests. Thanks FitNesse.
** Update 6/28: This project had the highest quality metric that I've ever been a part of in terms of defects / month / developer. FitNesse was a big part of that success.
Wednesday, January 27, 2010
FitNesse with C#
I've decided to try out FitNesse on my current project, so I attempted to find a complete tutorial on FitNesse with C# today. The best tutorial I found is contained in the following 2 links:
1. Installing FitNesse
2. Writing your first Hello World test
This is a great tutorial. Although in the end I figured everything out, there are a few steps that were implied in the tutorial that I missed on the initial pass. Here are those steps:
a. Leave the command line open. You need to execute this command line every time you want to use FitNesse. If you close the command line, FitNesse will not work.
b. When downloading the FitSharp binaries from github, place the contents of the zip file in a "dotnet2" folder that you create as a subfolder in the folder that you downloaded/installed the .jar file to.
After adding those 2 steps, everything worked great. Thanks to Gojko for all the work you put into the tutorial at the links above. I found the FitNesse User Guide to be lacking.
Tuesday, January 26, 2010
Delivering Bad News Early
I was listening to a Controlling Chaos podcast today, and heard this:
Project Team to Executive: "When should we tell you if we have bad news?"
Executive (a little dumbfounded): "Well, right away, of course!"
Project Team (thinking): "Excellent - thanks for giving us permission."
Yes, I think we all realize that it is important to deliver bad news as soon as possible. And the technique above can be useful to gain permission to deliver it early and allow you to understand and negotiate what 'bad news' means. Bad news can be found on all projects regardless of methodology - resource changes, customers who aren't sure what they want, budget risk, schedule risk, etc. The sooner we can identify the bad news and deal with it, the better.
For bad news related to the budget, how do your teams know when you will be over budget or schedule? Traditional project management methodologies use "Earned Value" to measure project progress against a budget. What frustrates me about this method is that until your team has delivered value through working code, your actual earned value is... zero. If requirements or design is complete, what value is that to the business? Does it verify what % complete the project is? Does it verify your estimates? Does it ensure that the business is getting what they asked for? How useful is it to measure the "Earned Value" on a traditional project until that project is complete?
This is my favourite thing about agile. Once your backlog is complete, estimated using relative estimating, and your first iteration is complete with working code you can calculate your initial velocity and compare it to the budget and schedule. After your second, third and future iterations you refine your velocity and with it your cost and schedule. You have delivered value through working and implemented code and you can calculate actual Earned Value based on what you have delivered vs what is remaining in the backlog. In this way, you can verify your project process early and report budget and schedule issues to your executive or sponsor early and more accurately.
Of course, this depends on short iterations, a backlog of user stories based on the INVEST model that is relatively complete, and working to 'done'. More on these at a later date.
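The velocity-based forecast above boils down to a few lines of arithmetic. A hedged sketch; the numbers in the example call are illustrative placeholders, not from any real project:

```python
import math

# Forecast remaining schedule and cost from observed velocity.
def forecast(remaining_points, avg_velocity, iteration_weeks, cost_per_iteration):
    """Round up: a partial iteration still costs a whole iteration."""
    iterations_left = math.ceil(remaining_points / avg_velocity)
    return {
        "iterations_left": iterations_left,
        "weeks_left": iterations_left * iteration_weeks,
        "cost_left": iterations_left * cost_per_iteration,
    }

# Illustrative numbers: 120 points left, averaging 25 points per 2-week iteration.
print(forecast(remaining_points=120, avg_velocity=25,
               iteration_weeks=2, cost_per_iteration=40000))
```

Re-run this after every iteration as your velocity refines, and compare the result against the remaining budget and schedule; that comparison is the early bad news.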
Monday, January 11, 2010
Generate index cards from your Excel 2007 backlog
You have captured all your user stories in Excel 2007 and now you want to print them out as index cards. Follow these steps in Word 2007.
First, create the word document and select your data
1. Make sure you have headings in your excel document. Example: User Story, Points, User Story ID
2. Open Word 2007 and create a new document
3. Go to the "Mailings" tab
4. Click "Start Mail Merge" and select "Step by Step Mail Merge Wizard"
5. Click "Select Recipients" and select "Use Existing List"
6. Select your excel file and then the name of the sheet (e.g. 'Sheet1$')
Second, setup your index cards
7. In the "Mail Merge" window (usually on the right), select the "Labels" document type and click "Next: Starting document" at the bottom.
8. Click "Label options…" to create a custom label size for your index cards
9. Click "New Label…"
10. Use the following settings for 4 index cards per page:
○ Label name: "Index Cards"
○ Top margin: 1.5 cm
○ Side margin: 0.6 cm
○ Vertical pitch: 13 cm
○ Horizontal pitch: 10 cm
○ Page size: Letter (8 1/2 x 11 in)
○ Label height: 12 cm
○ Label width: 9 cm
○ Number across: 2
○ Number down: 2
11. Click OK (Note - you can now re-use the new "Index Cards" label format the next time you print index cards. Look under "Product Number" on the "Label Options" window)
12. Click OK again to close the "Label Options" window
Now we add data to the index cards
13. Click "Next: Select recipients"
14. If you want to select only some of your stories, use the check box column to do so. Click OK.
15. The page should now show four labels, with the first one blank and the other three containing «Next Record»
16. Click "Next: Arrange your labels"
17. In the blank label, add the static text you want to display. I like to show Story ID, Points and the User Story on the card, but you can display whatever information is relevant for your project and process.
18. To add the fields from your product backlog, place the cursor where your field should go and click "More Items". Select the field you want to add and click "Insert". Repeat for any additional fields.
19. Now format your fields (bold, size, positioning, etc.) within the first label.
20. When you are finished, click the "Update all labels" button to copy your formatting to all the index cards.
21. Click "Next: Preview your labels". At this point, you should have four index cards per page populated with your stories.
22. Click "Next: Complete the merge".
23. Now print your index cards! (Make sure they are not printing double sided…)
Bonus Tip: Use the "Scotch Glue Stick Restickable Adhesive" to turn your index cards into reusable sticky notes.
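If you would rather script it than click through the wizard, the same backlog can be turned into plain-text cards with a few lines of Python. The column names below match the headings suggested in step 1, and the two stories are made-up examples; rename and replace to match your own sheet.

```python
import csv, io

# Read the backlog from a CSV export of the spreadsheet and render
# plain-text index cards, one per story.
BACKLOG_CSV = """User Story ID,Points,User Story
US-001,5,As a user I can register for an account
US-002,3,As a user I can reset my password
"""

def render_cards(csv_text):
    cards = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        cards.append(
            f"[{row['User Story ID']}]  ({row['Points']} pts)\n"
            f"{row['User Story']}\n" + "-" * 40
        )
    return "\n".join(cards)

print(render_cards(BACKLOG_CSV))
```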