One of my pet peeves is working in and with bug tracking tools. I am well aware of some of the arguments for the importance of these tools and I am not trying to address those here. Instead, I'll show you an example of an alternative that I have found useful.
First, using an approach like Specification by Example can reduce the need for bug tracking tools because communication goes up and defect counts go down. But even using this technique, defects still occasionally occur. Here is an example of how to use Specification by Example not only for 'requirements', but also for defects.
In June of this year I spoke at the Prairie Developer's Conference in Regina, Saskatchewan. Some of the speakers and volunteers were involved in creating the website, services, and mobile application for that conference. Since I was doing a Specification by Example talk, I decided to use the conference web services to illustrate how easy it is to create your first automated test against a web service using FitNesse. While working with the services I found a small defect. Instead of writing it up in a bug tracker with the steps to reproduce it, I wrote a test in FitNesse to confirm the defect:
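Something like this, assuming a Slim-style script table (the fixture and method names here are illustrative stand-ins, not the conference's actual code):

    !|script|Session Counts|
    |$NumberOfSessions=|number of sessions|
    |check|number of unique abstracts|$NumberOfSessions|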
This example calls a service that returns the list of sessions and does a few basic C# calculations on that list. It counts the sessions (allSessions.Count) and FitNesse maps that result to the NumberOfSessions variable above. It then counts the unique abstracts (allSessions.Select(s => s.Abstract).Distinct().Count()) and FitNesse maps that to the "number of unique abstracts" check. FitNesse then compares the two numbers and displays the result. In this case, 63 does not match 62, so FitNesse flags the row as an error, showing both the expected and actual values.
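For the curious, here is a minimal sketch of the kind of C# fixture that could sit behind a table like that. The Session shape and the ConferenceClient helper are hypothetical stand-ins for the real service client, hard-coded only so the sketch is self-contained:

    using System.Collections.Generic;
    using System.Linq;

    namespace ConferenceTests
    {
        // Hypothetical session shape; the real service returned more fields.
        public class Session
        {
            public string Title { get; set; }
            public string Abstract { get; set; }
        }

        // Stand-in for however the conference web service was actually called.
        public static class ConferenceClient
        {
            public static List<Session> GetAllSessions()
            {
                // In the real test this deserialized the service's session feed.
                // Hard-coded here only to keep the sketch runnable.
                return new List<Session>
                {
                    new Session { Title = "Talk A", Abstract = "Abstract A" },
                    new Session { Title = "Talk B", Abstract = "Abstract A" }, // duplicate!
                    new Session { Title = "Talk C", Abstract = "Abstract C" }
                };
            }
        }

        public class SessionCounts
        {
            private readonly List<Session> allSessions = ConferenceClient.GetAllSessions();

            // Slim maps the table cell "number of sessions" to this method.
            public int NumberOfSessions()
            {
                return allSessions.Count;
            }

            // Two sessions sharing one abstract collapse to a single entry here,
            // which is exactly the mismatch that exposed the defect (62 vs. 63).
            public int NumberOfUniqueAbstracts()
            {
                return allSessions
                    .Select(s => s.Abstract)
                    .Distinct()
                    .Count();
            }
        }
    }

Slim's graceful naming turns the cell text "number of sessions" into the NumberOfSessions method call, so the wiki table and the fixture stay in step without any extra wiring.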
Once the test confirmed the defect, I simply shared the failing test with the developers. When they reviewed it they could see exactly what the problem was. No back and forth was required to understand the issue or to confirm the steps to reproduce it. No one had to set the status of a defect to "working", "fixed", "duplicate", "resolved", "more information required", or anything else. One of the developers fixed the issue and even added a new service we could call to address the root cause: Are Session Abstracts Unique? I added a test for it, ran all my tests again, and was pleased to see them all go green.
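If the new check followed the same pattern, it could have been as small as one more row in the table backed by a one-line fixture method (again illustrative; the real method would call the new service rather than recompute the counts client-side):

    |check|are session abstracts unique|true|

    // Hypothetical fixture method; in practice this would call the
    // developer's new "Are Session Abstracts Unique?" service endpoint.
    public bool AreSessionAbstractsUnique()
    {
        return NumberOfUniqueAbstracts() == NumberOfSessions();
    }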
This process improved communication between tester and developer, ensured the defect would be re-tested on every run from then on, and kept us from spending unnecessary time in a bug tracking tool.
Hi,
This is very interesting, but I wonder about a detail: if the problem can't be fixed right away and we don't want to have failing tests in the main build, then the failing test must be "stored" somewhere. One place could be the branch where it should be fixed, which feels ok.
When there are more than a few such branches, someone must decide priorities and all that jazz. My feeling is that a bug tracker offers a way to manage the pending issues, especially for a product owner who might not be very technical.
Am I missing something?
best regards,
Vlad
Great questions. I expected questions like that and hoped this statement would cover them: "I am well aware of some of the arguments for the importance of these tools and I am not trying to address those here" ;) You have now called my bluff!
So, here are some thoughts:
Within an agile project, teams strive to build perfect on top of perfect, so it isn't too often that we find a problem that can't or shouldn't be fixed right away. When it does happen, you've already suggested one option with branching. Other options are to mark your non-critical failing tests with an "ignore" tag, move those tests to a separate project or FitNesse page (see the sketch below), or capture them in a tool of some kind. Hopefully you have few enough of these that this never becomes an issue; once again, the Specification by Example approach can help teams move in that direction.
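To make the "separate FitNesse page" option concrete, one hypothetical layout keeps the build pointed at a clean suite while parked defects live next door:

    ConferenceTests      <- suite the build runs on every check-in
        SessionCounts
        SpeakerFeeds
    KnownIssues          <- parked failing tests, reviewed when planning
        DuplicateAbstracts

The build stays green, and the KnownIssues suite can still be run on demand to see whether any parked defect has started passing.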
A product owner who isn't very technical is also a concern when considering this approach. In that case I'd ask the product owner to work with the development team to create the appropriate tests to demonstrate the error. Either way, we are still favouring high-bandwidth, face-to-face communication over a tool and documentation.