In our series on Test Automation at scale, our first session looks at the architecture of a data-driven automation framework running GUI front-end tests across large numbers of client machines. We walk you through the architecture of a system that minimises script and code complexity, increases reliability and speeds up test execution.
Critical in this setup is the concept of test case execution isolation: making sure that each test runs in isolation so that one failure doesn’t cause the whole test run to collapse. With the execution of each test isolated, it’s easier to identify and investigate failures. Isolation also makes the system easier to scale as more scenarios and more data permutations are added over time.
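As a minimal sketch of what execution isolation means in practice, the runner below launches each data-driven test case in its own subprocess, so a crash or hang in one case is reported without taking down the rest of the run. The wrapper command (run_single_test.cmd) and the CSV column names here are purely illustrative, not part of the actual framework.

```python
import csv
import subprocess
import sys

# Illustrative isolation runner: every test case from the CSV data file is
# executed in its own subprocess, so a crash or hang in one case cannot
# abort the remaining cases. The wrapper script name and column names
# are hypothetical placeholders.

def run_isolated(test_id: str, timeout_s: int = 600) -> str:
    try:
        result = subprocess.run(
            ["run_single_test.cmd", test_id],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return "PASS" if result.returncode == 0 else "FAIL"
    except subprocess.TimeoutExpired:
        # A hung test is killed and reported; it does not stop the run.
        return "TIMEOUT"

def main(csv_path: str) -> None:
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            print(f"{row['test_id']}: {run_isolated(row['test_id'])}")

if __name__ == "__main__":
    main(sys.argv[1])
```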
The key to all of this is separating the different aspects of the automated testing system and making sure the right tool is employed for each of the specific functions demanded by a scalable architecture.
We have test scripts written and maintained in our automation tool of choice (e.g. TestComplete in this setup). Test data is managed and maintained in a container that makes it easy to visualise and modify: here we’ve used CSV data files, but this could equally be a database repository. We’re also ensuring that all code and data is controlled with a source code control tool (e.g. Git or SVN), which provides a central repository not only for automation engineers but also for our test execution engine. That execution engine is the open source CI tool Jenkins. In our setup Jenkins is responsible for starting and controlling the execution of each individual test, with each test farmed out to a group of lightweight VMs.
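To make the dispatch step concrete, here is a hedged sketch assuming a parameterised Jenkins job (named run-gui-test here as a placeholder) that runs a single test case on whichever VM agent is free. The script reads the CSV data file and queues one build per row through Jenkins’ standard buildWithParameters endpoint; the job name, parameter name, URL and credentials are assumptions for illustration only.

```python
import csv
import requests  # assumes the 'requests' package is installed

# Placeholder values: point these at your own Jenkins instance and job.
JENKINS_URL = "http://jenkins.example.com"
JOB = "run-gui-test"            # hypothetical parameterised job, one test per build
AUTH = ("automation-user", "api-token")

def dispatch_all(csv_path: str) -> None:
    """Queue one Jenkins build per data row; an agent VM picks up each build."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            resp = requests.post(
                f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
                params={"TEST_ID": row["test_id"]},  # parameter name is illustrative
                auth=AUTH,
            )
            resp.raise_for_status()

if __name__ == "__main__":
    dispatch_all("test_cases.csv")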
With the test development and execution architecture in place, what remains is collating and displaying test results, along with the critical process of analysing, updating and re-running failed tests. All of this is covered in the next session, where we look at merging and displaying test results, adding new scripts and re-running failed tests.