An important part of our Quality Center test management process is tracking which releases and configurations our test cases are executed against. So for example, when we run a test we should record that it was executed on Windows XP with IE9, against version 1 build xyz of the product we’re developing.
What we’re looking at in this session is, firstly, defining a release/cycle and linking test sets and runs to it. Then we look at the different ways we can factor configurations into those runs and link them back.
With Quality Center we define releases at the top level. A release tracks a group of changes we might be making to a product under development. We can also define cycles within each release, where a cycle is a logical grouping of activities or tasks for the product we’re working on. We then assign requirements, tests, test sets and defects to a particular release and/or cycle so that we can track status and progress.
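If you prefer to work with these entities programmatically rather than through the UI, the same release/cycle hierarchy is exposed through the ALM REST API. Below is a minimal Python sketch that walks releases and their cycles; the server URL, domain, project name and credentials are placeholders, and the exact login sequence and endpoints vary a little between QC/ALM versions, so treat it as an illustration rather than a definitive reference.

```python
import requests

BASE = "http://almserver:8080/qcbin"        # assumed ALM/QC server URL
DOMAIN, PROJECT = "DEFAULT", "DemoProject"  # assumed domain and project names

session = requests.Session()
# Classic two-step login: basic-auth against the authentication point,
# then open a site session (required on newer ALM versions).
session.get(f"{BASE}/authentication-point/authenticate", auth=("user", "password"))
session.post(f"{BASE}/rest/site-session")

api = f"{BASE}/rest/domains/{DOMAIN}/projects/{PROJECT}"
headers = {"Accept": "application/json"}


def field(entity, name):
    """Pull a single field value out of ALM's verbose entity JSON."""
    for f in entity["Fields"]:
        if f["Name"] == name:
            return (f.get("values") or [{}])[0].get("value")
    return None


# Walk the hierarchy: releases at the top, cycles underneath each release.
releases = session.get(f"{api}/releases", headers=headers).json()
for release in releases.get("entities", []):
    print("Release:", field(release, "id"), field(release, "name"))
    cycles = session.get(
        f"{api}/release-cycles",
        headers=headers,
        params={"query": "{parent-id[%s]}" % field(release, "id")},
    ).json()
    for cycle in cycles.get("entities", []):
        print("  Cycle:", field(cycle, "id"), field(cycle, "name"))
```

Everything the sketch prints is exactly what you see in the Releases module tree, which is the point: the GUI and the API are two views of the same release/cycle entities.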
Whilst tracking against releases can be quite straightforward, tracking against configurations can be a little more complex. There are several ways to track how we run the same tests on different configurations:
1. As part of the test run results, where we record the OS, build and other details (adding custom fields for any additional information if necessary); a sketch of this approach follows the list.
2. Specific test set folders linked to the release/cycle, where we create multiple test set folders (for example one per configuration) and link all of them back to the release/cycle.
3. Using test case configurations (covered in a different video).
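As an illustration of the first option, here is a minimal sketch of recording the configuration on a run through the ALM REST API (Python again, same assumed server and project). The standard run fields for operating system and the user-01 custom field used here for the browser are assumptions about one particular project's field names; check Project Customization for the real labels in yours.

```python
import json
import requests

BASE = "http://almserver:8080/qcbin"        # assumed ALM/QC server URL
DOMAIN, PROJECT = "DEFAULT", "DemoProject"  # assumed domain and project names
RUN_ID = "1043"                             # hypothetical run we just executed

session = requests.Session()
session.get(f"{BASE}/authentication-point/authenticate", auth=("user", "password"))
session.post(f"{BASE}/rest/site-session")

api = f"{BASE}/rest/domains/{DOMAIN}/projects/{PROJECT}"
headers = {"Accept": "application/json", "Content-Type": "application/json"}

# Record the configuration the run was executed on. "os-name"/"os-build" are
# assumed to be the standard run fields; "user-01" stands in for a custom
# "Browser" field added in Project Customization; adjust the names to your project.
run_fields = {
    "Fields": [
        {"Name": "os-name",  "values": [{"value": "Windows XP"}]},
        {"Name": "os-build", "values": [{"value": "2600"}]},
        {"Name": "user-01",  "values": [{"value": "IE9"}]},
    ]
}

resp = session.put(f"{api}/runs/{RUN_ID}", headers=headers, data=json.dumps(run_fields))
resp.raise_for_status()
print("Run", RUN_ID, "now records its OS, build and browser")
```

The same fields can of course be filled in by hand in the Run Details dialog; the value of doing it consistently (manually or via automation) is that every run carries its configuration with it.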
What we get out of recording this information is that we can see all historical results from the perspective of a configuration. So, for example, a configuration with an excessive number of failures can be flagged as having a configuration-specific issue. We also get visibility of which test cases were run against the different versions of the product, so we can identify where a failure surfaced and where that failure was fixed.