July 20, 2012
It’s well known that keeping track of results from testcases against builds, configurations, environments, etc. can become quite difficult to manage. It’s one of the reasons we usually implement test management tools. These tools enable us to log our results against these different aspects of our testing and then quickly produce traceability reports to show our coverage. Even with the right tools, though, the permutations and combinations can become difficult to track. If we have just one testcase that we need to run against two platforms and two operating systems, along with three configurations relevant to our application, we already have 2 × 2 × 3 = 12 permutations. That’s 12 permutations for just one testcase. And with this example we’re not even considering...
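As a rough illustration of how quickly this matrix grows, here is a minimal Python sketch that enumerates the permutations for a single testcase. The platform, operating system, and configuration names are made up for the example and aren’t tied to any particular tool:

```python
from itertools import product

# Hypothetical test dimensions for a single testcase.
platforms = ["desktop", "mobile"]
operating_systems = ["Windows", "Linux"]
configurations = ["config-A", "config-B", "config-C"]

# Every combination of platform x OS x configuration is one
# permutation the testcase must be run against.
matrix = list(product(platforms, operating_systems, configurations))

print(len(matrix))  # 2 * 2 * 3 = 12 permutations for one testcase
for platform, os_name, config in matrix:
    print(platform, os_name, config)
```

Add a second testcase, or one more dimension such as browser, and the count multiplies again.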
Read more...
July 13, 2012
Re-running tests to check that bugs are fixed is an essential part of the test management process. Teams grow, the amount of data generated by completed testcases increases, and the number of raised defects expands too. As a result it becomes more difficult to keep track of the retesting. There are several different ways to approach this. Firstly, we can identify all failed cases from a previous cycle and re-run just those testcases. This is the simplest approach to execute, but it has one serious flaw: defects raised outside of the QA effort don’t then get retested. We talk more about this approach in practice in this managing retests blog post. The second approach...
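A minimal sketch of that first approach, assuming hypothetical result records (a real test management tool would supply these from its own API or export):

```python
# Results from the previous cycle (illustrative data only).
previous_cycle = [
    {"testcase": "TC-101", "status": "passed"},
    {"testcase": "TC-102", "status": "failed"},
    {"testcase": "TC-103", "status": "failed"},
    {"testcase": "TC-104", "status": "passed"},
]

# Queue only the testcases that failed last time round.
retest_queue = [r["testcase"] for r in previous_cycle if r["status"] == "failed"]

print(retest_queue)  # ['TC-102', 'TC-103']
# The flaw described above: defects raised outside the QA effort
# never appear in these results, so they never reach the queue.
```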
Read more...
July 4, 2012
In many cases we’ll need our test management tool to aggregate and report on results from various perspectives. For example, we may need to aggregate across different projects, application modules, runs or cycles. Whilst this sounds simple in theory, it’s not always so simple in practice. Much depends on how you structure things within your test management tool and how well that tool copes with aggregating results. Another area of complexity arises when you want to see the last run status values for a group of tests across different releases or builds of a product. We argue against The Aggregated Test Management Results Report in this post. However, there are times when this can be useful...
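To make the "last run status" problem concrete, here is a minimal sketch under assumed data: the run records are hypothetical, and `run_id` stands in for whatever ordering (timestamp, build number) your tool actually provides:

```python
# Runs recorded against different builds (illustrative data only).
runs = [
    {"run_id": 1, "build": "1.0", "testcase": "TC-101", "status": "passed"},
    {"run_id": 2, "build": "1.1", "testcase": "TC-101", "status": "failed"},
    {"run_id": 3, "build": "1.0", "testcase": "TC-102", "status": "passed"},
]

# Walk the runs in order; later runs overwrite earlier ones, so
# each testcase ends up with its most recent status.
last_status = {}
for run in sorted(runs, key=lambda r: r["run_id"]):
    last_status[run["testcase"]] = run["status"]

print(last_status)  # {'TC-101': 'failed', 'TC-102': 'passed'}
```

Note that an aggregate like this hides which build each status came from, which is part of why such reports can mislead.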
Read more...