February 18, 2013
We’re a month or two into 2013 now and, with our new year’s resolutions some way behind us, we thought we’d take a look at how you might improve your test management process over the remaining 10 months of the year. So over the next 5 days we’re going to look at 50 improvement ideas that you can consider for implementation. Here are the first 10: 1. Review your reports – one of the main purposes of capturing data in a test management tool is so that you can report on aspects of your projects like progress, resource utilisation and goals. If the reports you’re currently creating don’t trigger changes where necessary then you should start questioning the purpose of those...
Read more...
February 8, 2013
One critical aspect of test management involves closing the loop between defect fixes and source code control. It’s all very well raising bugs as part of the QA process, and it’s good to know that those bugs have been fixed. To be totally in control, though, you need to know what was fixed and how. Typically this means linking your issue tracking system into your source code control system. In the example below we look at how to integrate QAComplete/ALMComplete with Subversion. Specifically we see how you can raise a bug as part of the QA effort and then, when developers fix the bug and check the code/fixes in, see the traceability of files changed linked to the...
Read more...
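The full article walks through the QAComplete/ALMComplete user interface, but the underlying pattern is worth sketching. Below is a rough, generic illustration of a Subversion post-commit hook that spots defect IDs in commit messages and records the files changed against them; the report_fix_to_tracker function is a hypothetical placeholder, not part of QAComplete’s API.

```python
#!/usr/bin/env python
"""Generic sketch of a Subversion post-commit hook linking commits to defects.

This is NOT the QAComplete/ALMComplete integration described in the article --
just an illustration of the pattern: developers mention a defect ID in the
commit message, and the hook records which files changed for that defect.
"""
import re
import subprocess
import sys

DEFECT_PATTERN = re.compile(r"(?:bug|defect|issue)\s*#?(\d+)", re.IGNORECASE)

def svnlook(subcommand, repo, revision):
    """Run `svnlook <subcommand> -r REV REPOS` and return its output."""
    return subprocess.check_output(
        ["svnlook", subcommand, "-r", revision, repo], text=True
    )

def report_fix_to_tracker(defect_id, revision, changed_files):
    """Hypothetical call into your issue tracker's API (replace as needed)."""
    print(f"Defect {defect_id}: fixed in r{revision}, files: {changed_files}")

def main(repo, revision):
    log_message = svnlook("log", repo, revision)
    changed_files = svnlook("changed", repo, revision).splitlines()
    for defect_id in DEFECT_PATTERN.findall(log_message):
        report_fix_to_tracker(defect_id, revision, changed_files)

if __name__ == "__main__":
    # Subversion passes the repository path and revision to post-commit hooks.
    main(sys.argv[1], sys.argv[2])
```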
January 9, 2013
The implementation of a Test Management or Application Lifecycle Management (ALM) tool should be approached in the same way as any other IT project. These tools, once set up, will touch not only the software engineering team but also operations, project management, customer support and many other parts of the business. Getting the implementation wrong can be a costly exercise. Not only will you fail to see the expected return on investment but ultimately you’ll impact the quality of the products you’re releasing to your customers. And there are countless examples of companies whose reputations have been adversely affected by bad product releases. Fundamentally you need to approach the implementation of a test management or ALM tool as a project. That...
Read more...
January 3, 2013
Sharing project artifacts across different projects within a test management tool tends to be difficult. For example, you create a project space within your tool of choice, develop lots of test cases and run them. However, another project is running concurrently and its team wants to use the same test cases within a separate project space. Try to use the same test cases or artifacts across different projects and typically you end up copying the artifacts to the other projects. The big problem with this approach is that you then have two instances of the same artifact: update one in one project and the other is no longer a direct copy. Few tools seem to cater for these scenarios very...
Read more...
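To make the copy-versus-reference problem concrete, here is a minimal sketch using hypothetical classes (not any particular tool’s data model): a copied test case silently drifts from the original, whereas an artifact shared by reference stays consistent across both projects.

```python
import copy

class TestCase:
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps

class Project:
    def __init__(self, name):
        self.name = name
        self.test_cases = []

# Copying: two independent objects that immediately start to diverge.
login_test = TestCase("Login", ["open app", "enter credentials", "submit"])
project_a = Project("Project A")
project_b = Project("Project B")
project_a.test_cases.append(login_test)
project_b.test_cases.append(copy.deepcopy(login_test))
login_test.steps.append("verify dashboard")          # only Project A sees this

# Sharing by reference: both projects point at the same library artifact,
# so an update to the shared artifact is visible everywhere.
shared_library = {"Login": TestCase("Login", ["open app", "enter credentials", "submit"])}
project_a.test_cases = [shared_library["Login"]]
project_b.test_cases = [shared_library["Login"]]
shared_library["Login"].steps.append("verify dashboard")  # both projects see this
```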
At some point in the test management process you’ll need to re-test fixes that have been made to resolve issues. This re-test process needs to be approached from two angles. Firstly, identifying failed test cases from previous test runs and then re-running those test cases in a subsequent cycle. Secondly, identifying defects that have been resolved and running tests to confirm that those defects have been fixed. Our test management tool, in this case Quality Center, needs to support us in these endeavors. In this video we look at how QC can be used to pull together a test set which includes test cases that have failed in the past. The video also covers how we can identify fixed defects,...
Read more...
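The video demonstrates this through Quality Center’s UI; as a rough, tool-agnostic illustration of the selection logic, the snippet below builds a re-test set from the two angles described above using plain dictionaries (the field names are made up for the example).

```python
# Previous run results and defects, represented as simple records.
previous_runs = [
    {"test": "TC-1", "status": "Passed"},
    {"test": "TC-2", "status": "Failed"},
    {"test": "TC-3", "status": "Failed"},
]
defects = [
    {"id": "D-10", "status": "Fixed", "linked_tests": ["TC-4"]},
    {"id": "D-11", "status": "Open",  "linked_tests": ["TC-5"]},
]

# Angle 1: test cases that failed in earlier cycles.
failed_tests = {run["test"] for run in previous_runs if run["status"] == "Failed"}

# Angle 2: tests linked to defects that have been resolved and need confirmation.
fix_verification_tests = {
    test
    for defect in defects if defect["status"] == "Fixed"
    for test in defect["linked_tests"]
}

retest_set = sorted(failed_tests | fix_verification_tests)
print(retest_set)   # ['TC-2', 'TC-3', 'TC-4']
```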
No test team can run every single test case on every single release of a product. So being able to aggregate results from different views or areas in your test management tool is essential when it comes to seeing the whole picture. However, how your test management tool deals with that aggregation might not be quite so simple. This partly depends on how the QA team have set things up and partly on how well the tool you use deals with this. In this video we look at how Quality Center deals with result aggregation. Ultimately we’re looking for accurate reporting that demonstrates the quality of the product at a point in time. The very nature of this aggregation though...
Read more...
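As an illustration of one common aggregation rule – “the most recent run of each test case wins” – here is a generic sketch. It is not how Quality Center itself stores or rolls up results, just the idea behind the kind of report we’re after.

```python
from collections import Counter

# Runs from different cycles; later cycles supersede earlier ones per test case.
runs = [
    {"test": "TC-1", "cycle": 1, "status": "Failed"},
    {"test": "TC-1", "cycle": 2, "status": "Passed"},
    {"test": "TC-2", "cycle": 1, "status": "Passed"},
    {"test": "TC-3", "cycle": 2, "status": "Failed"},
]

latest = {}
for run in sorted(runs, key=lambda r: r["cycle"]):
    latest[run["test"]] = run["status"]          # later cycles overwrite earlier ones

summary = Counter(latest.values())
print(summary)   # Counter({'Passed': 2, 'Failed': 1})
```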
Different test management tools use different terminology for the concept of having a library of reusable test cases. With Quality Center the term used is the test plan. It’s not quite clear why it’s referred to as the test plan, as a test plan is far more than just a repository of reusable test cases you’ve written. I suspect it’s just historical so we’ll let that point go. Anyway, in Quality Center terms the test plan is a library of test cases. Those test cases reside in the test plan area and can be used in test sets. Those sets are then executed, along with their associated test cases. So you can see that you have two different instances of a test case…. 1. The instance that...
Read more...
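The “two instances” idea can be sketched in a few lines of code. The class names below are illustrative only – they are not Quality Center objects – but they show one definition living in a library while each test set run gets its own independent result.

```python
class LibraryTestCase:
    """The single, reusable definition that lives in the test plan library."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps

class TestInstance:
    """One execution of a library test case within a particular test set."""
    def __init__(self, library_case, test_set):
        self.library_case = library_case
        self.test_set = test_set
        self.status = "Not Run"

    def execute(self, status):
        self.status = status

login = LibraryTestCase("Login", ["open app", "enter credentials", "submit"])

smoke_run = TestInstance(login, test_set="Smoke")
regression_run = TestInstance(login, test_set="Regression")

smoke_run.execute("Passed")
regression_run.execute("Failed")

# One definition, two independent results:
print(login.name, smoke_run.status, regression_run.status)   # Login Passed Failed
```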
December 31, 2012
Scaling up your test cases effectively usually involves parameterisation: the concept of having one test case and then feeding in parameter values to create variations of that test case at run time. So essentially you have the same test case which you execute repeatedly with different data values. Defining parameters in Quality Center is done at the step level, where each step is fed parameter values from a grid of values. Each parameter has a default value, a description and order information associated with it. In this session we look at defining, adding and assigning values to parameters that are used when executing a run. As with other test management tools there are a number of key...
Read more...
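The concept translates to code quite naturally. The sketch below is a generic illustration of step-level parameterisation – the placeholder syntax and function name are our own, not Quality Center’s – showing one set of steps expanded into a run per row of the parameter grid.

```python
# One test case definition with <name> placeholders in its steps.
test_case_steps = [
    "open the login page",
    "enter user <username> with password <password>",
    "check the greeting reads 'Hello <username>'",
]

# Grid of parameter values: each row produces one variation at run time.
parameter_grid = [
    {"username": "alice", "password": "secret1"},
    {"username": "bob",   "password": "secret2"},
]

def expand(steps, values):
    """Substitute <name> placeholders in each step with the row's values."""
    expanded = []
    for step in steps:
        for name, value in values.items():
            step = step.replace(f"<{name}>", value)
        expanded.append(step)
    return expanded

for row in parameter_grid:
    print(f"--- run with {row} ---")
    for step in expand(test_case_steps, row):
        print(" ", step)
```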
An important part of our Quality Center test management process is tracking which releases and configurations our test cases are executed against. So for example, when we run a test we should record the fact that it was executed on Windows XP with IE9 and executed against version 1 build xyz of the product we’re developing. What we’re looking at in this session is, firstly, defining a release/cycle and linking test sets and runs to it. Then we look at how we can factor in and link configurations to these runs in various ways. With Quality Center we define releases at the top level. These releases track a group of changes we might be making to a product under development. We...
Read more...
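As a rough illustration of the traceability we’re describing, here is a hypothetical data model (the field names are ours, not Quality Center’s schema) in which each run records its release, cycle, build and configuration.

```python
from dataclasses import dataclass

@dataclass
class Configuration:
    os: str
    browser: str

@dataclass
class TestRun:
    test_name: str
    release: str
    cycle: str
    build: str
    configuration: Configuration
    status: str

run = TestRun(
    test_name="Login",
    release="Release 1",
    cycle="Cycle 2",
    build="xyz",
    configuration=Configuration(os="Windows XP", browser="IE9"),
    status="Passed",
)

# Reporting then becomes a matter of filtering runs by release, cycle or configuration.
print(f"{run.test_name} on {run.configuration.os}/{run.configuration.browser} "
      f"against {run.release} build {run.build}: {run.status}")
```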
November 16, 2012
Reusable test cases that can be used across different projects can be difficult to implement and manage. There are some good ways to approach this though. In this article we look at ways to configure your test management tool so that you can effectively handle main products and sub products. As a practical example we’re going to base this on a submarine, where we’re looking at managing the major release of the submarine and independently managing the Engine, Periscope and Radar sub products. We’ll look at working separately on these engine, periscope and radar products, then rolling up work on those to include them in a major release of the submarine. You can see an example of this...
Read more...
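The roll-up idea can be sketched with the article’s submarine example. The structure and numbers below are purely illustrative and not tied to any particular tool: each sub product keeps its own results, and the major release aggregates them.

```python
# Per-sub-product result counts (illustrative figures only).
sub_products = {
    "Engine":    {"Passed": 40, "Failed": 2},
    "Periscope": {"Passed": 15, "Failed": 0},
    "Radar":     {"Passed": 22, "Failed": 5},
}

def roll_up(results_by_product):
    """Aggregate per-sub-product counts into a single view for the major release."""
    totals = {"Passed": 0, "Failed": 0}
    for counts in results_by_product.values():
        for status, count in counts.items():
            totals[status] += count
    return totals

submarine_release = roll_up(sub_products)
print("Submarine major release:", submarine_release)   # {'Passed': 77, 'Failed': 7}
```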