Estimation of test automation in an Agile environment
One way to ensure the quality of software is by testing it. Tests can be run both manually and automatically. Test automation, with its ups and downs, has been the centre of attention for many years. What implementing test automation entails, and the impact it has on an organization, is usually underestimated, especially the estimation of the required effort. Adding test automation to the full range of testing measures requires extra human capacity, both for the initial set-up and for the maintenance of the automated tests, on top of the test implementation itself. The question is: how much human capacity is needed to automate the tests for the functionality that has to be tested automatically? This article describes several methods to estimate the required effort for test automation and the approach to collect the required data.
Keywords: Estimation; Test Automation; Agile; Testing; Return on Investment; Future proof.
I. Introduction
One way to ensure the quality of software is by testing it. What we mean by testing software is the following:
“The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.”
Running tests can be done both manually and automatically. Test automation, with its ups and downs, has been the centre of attention for many years. What implementing test automation entails, and the impact it has on an organization, is usually underestimated. Because of the rising popularity of Agile and the implementation of continuous development and deployment, test automation seems to be taking on a fixed position. The most important reason for this is that the amount of test work is no longer manageable when done manually.
Adding test automation to the full range of testing measures requires extra human capacity, both for the initial set-up and for the maintenance of the automated tests, on top of the test implementation itself. The question is: how much human capacity is needed to automate the tests for the functionality that has to be tested automatically?
Test budgeting has been a problem from the very beginning. Several methods have been developed, but they do not always produce correct results; Section II expands on this topic. Practice shows that significantly more time is needed than was budgeted at the start of the project.
The question is how to get a grip on this in order to make reliable predictions concerning the necessary capacity. Based on previous methods, which are the ways of budgeting commonly used within the Agile methodology, three ways of thinking have been developed to budget automated test capacity. These three ways of thinking are described in this article; however, they still have to be tested in practice.
The fundamental principle in this article is a structural, future proof design of test automation within the Agile development methodology. That means designing test automation in such a way that the developed automated tests can be executed, reused, and easily transferred to others, not only today but also in the future, and with minimal maintenance effort.
The paper has the following structure. Section II describes the causes of poor test budgets with regard to test automation. Section III describes the general elements that affect the required test capacity. Section IV gives insight into budgeting the future proof elements of test automation. Section V discusses the three ways of budgeting. A detailed example is included in Section VI. Section VII describes the collection of the data and the approach to fit it to the described estimation methods. Return on investment is dealt with in Section VIII. Lastly, Section IX describes the conclusions and future work.
II. Causes of poor test automation budgeting
As indicated in the introduction, budgeting within ICT is a common problem. Which causes are at the core of this? Several reasons can be found.
- Using new development and/or programming techniques of which there is not enough knowledge.
- Not questioning the desired functionality enough, which causes new problems to arise during the implementation and test phases.
- Unfamiliarity with the quality of the software at the start.
- The skills of the persons involved, such as the tester or the developer: is someone sufficiently skilled to make a solid budget?
What we see in practice is that experience figures are hardly recorded, if at all. This is especially the case for budgeting test automation. A short search in the ISBSG database shows that only three projects have been recorded in which an Agile development technique is combined with automated testing. For only one of these three projects has the delivered test effort been registered. See Table 1.
Table 1: Analysis of the ISBSG database with regard to test projects
| #projects | Way of testing | Agile development methodology | Test effort known |
|---|---|---|---|
| 3 | Automated | Yes | 1 |
It becomes clear from this analysis that there is no useful data available to draw conclusions.
In order to try to answer the question “How is budgeting done in an Agile development environment concerning test automation?”, a survey was conducted among 100 people. The results are recorded in Table 2.
Table 2: Results survey way of budgeting test automation
| Way of budgeting | Number |
|---|---|
| Percentage of available time | 1 |
| Pokering the effort | 3 |
As Table 2 shows, no reliable results can be extracted from the survey. Despite a reminder, the response was very low. Both the results of the survey and the analysis of the ISBSG database, which show comparable results, were a trigger to keep thinking about ways to budget test automation reliably and predictably.
A first draft was made for the Valid2016 conference, where the initial ideas were sketched in the presentation “Estimation of test automation in an Agile Environment”, which discussed the question: “How to estimate the required effort in an Agile environment regarding test automation?”
During this presentation, three ways of thinking were sketched on how test automation can be budgeted in an Agile development environment in order to set up test automation in a structured and future proof manner. This article elaborates on that presentation, incorporating the input received while working out the ways of thinking further. Besides the manner of budgeting, each approach involves a number of general elements that influence the eventual budget for test automation. These general elements are elaborated on first.
III. General elements influencing budgeting of test automation
Apart from the budget required to automate the test scripts, there are several preconditional elements that influence the required budget for test automation. No matter at which level in the organization (project, division or company level) you wish to implement test automation, you will have to deal with these elements. The higher the level at which you implement test automation, the bigger the impact on the organization: if you aim test automation at company level instead of at a project or an individual sprint, the elements involved will have a wider impact. The elements can be separated into so-called initial costs and continuity costs. The initial costs are those incurred when setting up and developing test automation for the first time in an organization.
Continuity costs are the costs that have to be made after the introduction of test automation in order to maintain and, if necessary, expand it. Table 3 lists the relevant elements with an indication of how they can be measured and a short explanation.
Table 3: Initial and continuity elements of test automation

| Component | Element | Unit of measure | Explanation |
|---|---|---|---|
| Initial | Human capacity | Costs per day | Number of people that are going to work on test automation |
| | Test tools | Price per license | Type and number of test tools to purchase, aligned with the various development platforms |
| | Installation | Costs per day | Installation of the test tools in the ICT landscape |
| | Test data | Costs per day | Choosing a working method: formulating test data requirements, creating synthetic test data, or scrambling production data for use as test data, taking privacy into account |
| | Education | # days | Number of required courses |
| | Frequency of usage | # runs | How often is test automation used? |
| | Virtualization | n/a | Is virtualization used? |
| | Number of integrations with surrounding systems | # integrations, costs per integration | Which integrations are relevant and how are they mutually dependent? |
| | Support of company objectives | n/a (not applicable) | Which strategic objectives have to be supported? |
| Continuity | License costs test tools | Price per license | Annual costs for the test tools |
| | Maintenance test scripts | Modification frequency | Percentage of time reserved for maintenance of the test scripts |
| | Training | # days | Required training for new employees and for new versions of the test tools |
| | Test tool upgrades | # licenses × updates | Costs linked to purchasing and installing test tool upgrades |
| | Integrations | Costs per integration | Expansion and maintenance of the integrations with surrounding systems |
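The initial/continuity split of Table 3 can be sketched as a simple cost model. The following is a minimal sketch only; the function names and all rates and quantities are hypothetical placeholders, not figures from this article:

```python
# Minimal sketch of the initial/continuity cost split from Table 3.
# All rates and quantities below are hypothetical placeholders.

def initial_costs(people, day_rate, setup_days, licenses, license_price,
                  install_days, training_days):
    """One-off costs of introducing test automation."""
    return (people * setup_days * day_rate        # human capacity for the set-up
            + licenses * license_price            # purchase of test tool licenses
            + install_days * day_rate             # installation in the ICT landscape
            + people * training_days * day_rate)  # education

def continuity_costs(licenses, annual_license_price, maintenance_days, day_rate):
    """Recurring annual costs after the introduction."""
    return (licenses * annual_license_price   # annual license costs
            + maintenance_days * day_rate)    # maintenance of the test scripts

print(initial_costs(people=4, day_rate=1000, setup_days=5,
                    licenses=4, license_price=1000,
                    install_days=2, training_days=1))   # 30000
print(continuity_costs(licenses=4, annual_license_price=1000,
                       maintenance_days=10, day_rate=1000))  # 14000
```

Such a model makes explicit which of the Table 3 elements drive the one-off budget and which recur every year.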
IV. Budgeting future proof elements
Future proof test automation rests on the aspects of reusability, repeatability and transferability. The required information is partly delivered by the overarching test procedure at company level: think of frameworks for reusability, tooling and test data generation for repeatability, and a wiki for transferability.
The question is: which percentage of the required test capacity for test automation has to be reserved for developing future proof automated test scripts?
The rules of thumb shown in Table 4 can be applied.
Table 4: Rules of thumb for determining the future proof factor

| Aspect | Level | Factor |
|---|---|---|
| Reusability | H (High) | 1.2 |
| | M (Medium) | 1.0 |
| | L (Low) | 0.8 |
| Repeatability | H (High) | 1.2 |
| | M (Medium) | 1.0 |
| | L (Low) | 0.8 |
| Transferability | H (High) | 1.2 |
| | M (Medium) | 1.0 |
| | L (Low) | 0.8 |
An example: 100 hours have been calculated for test automation. In order to make this set-up future proof, the following values have been agreed upon: reusability High (1.2), repeatability Medium (1.0) and transferability Medium (1.0). Determining these values is done in consultation with the client and is linked to the company's objectives. The number of hours required to set up this part future proof will then be: (100 × 1.2) × 1.0 × 1.0 = 120 hours.
Another aspect to take into consideration is the scope of the test automation. For which test type are the test automation scripts developed: a sprint test, an integration test (IT) or a chain test (CT)? The bigger the scope, the more synchronization with all parties becomes necessary. Think of which test data to use, which test scripts, the availability of test environments, and the analysis of the results.
You can introduce another factor for this, namely the test type, with a multiplier per scope; in the example here a chain test carries a factor of 1.5. Say you would like to automate a chain test. The necessary effort, based on the factors above, would be: 120 × 1.5 = 180 hours.
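The rule-of-thumb calculation can be written down in a few lines. This is a sketch only: the function name and parameter encoding are mine, while the factors come from Table 4 and the chain test multiplier from the example above:

```python
# Rule-of-thumb estimate: a base number of hours is multiplied by a factor
# per future proof aspect (H = 1.2, M = 1.0, L = 0.8, per Table 4) and by a
# scope factor (1.5 for a chain test, as in the worked example).

FACTORS = {"H": 1.2, "M": 1.0, "L": 0.8}

def future_proof_hours(base_hours, reusability, repeatability, transferability,
                       scope_factor=1.0):
    hours = base_hours
    for rating in (reusability, repeatability, transferability):
        hours *= FACTORS[rating]
    return hours * scope_factor

# The article's example: 100 base hours, reusability High, the rest Medium.
print(future_proof_hours(100, "H", "M", "M"))                    # 120.0
print(future_proof_hours(100, "H", "M", "M", scope_factor=1.5))  # 180.0
```

The encoding makes it easy to recalibrate the H/M/L factors once real project data becomes available.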
As stated, these are rules of thumb, which will have to be tested and adjusted by collecting data from case studies yet to be executed.
V. Budgeting test automation
The previous sections discussed which general elements influence the test automation budget and showed that making test automation future proof also impacts the budget.
So how do you really budget test automation?
The following methods of budgeting will be elaborated on:
- Percentage of the available time;
- Pokering the required effort;
- Pokering the required effort in combination with being 1 sprint behind.
A. Percentage of the available time
This method takes reserving a percentage of the total available test time for test automation as its starting point. A frequently used percentage, distilled from various projects, is 20% of the available test time. Suppose that, for 100 hours of testing time, 20 hours are used for test automation. Part of these 20 hours is then used to make the test automation future proof.
By monitoring the actually needed capacity during each sprint, a realistic percentage can eventually be established, and the velocity becomes more and more accurate. The question is how reliable such a number is. Does a fixed number allow you to automate everything that has to be automated?
The risk is that, in a given sprint, not everything can be tested automatically, because the amount of work requires more time than is available. A debt is built up, which either has to be cleared in a next sprint, or the team has to scale up to finish the work. One of Agile's features is shared team effort: developers can support test automation, but this comes at the expense of other work, which puts pressure on the velocity and can lead to not realizing all the selected product backlog items.
This is a very basic way of budgeting, which begs the question of how much functionality can actually be tested automatically within its constraints. The advantage is that you always know how much time is available for test automation.
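The debt build-up described above can be illustrated with a short sketch. The sprint figures below are hypothetical, and the 20% share is the frequently used percentage mentioned earlier:

```python
# Sketch of budgeting method A: reserve a fixed share of the test time for
# automation and track the debt that builds up when the reserved hours
# fall short. The sprint figures are hypothetical.

def automation_debt(sprints, share=0.20):
    """Hours of automation work pushed to later sprints.

    `sprints` is a list of (total_test_hours, automation_hours_needed)."""
    debt = 0.0
    for test_hours, needed in sprints:
        available = test_hours * share           # the reserved share of test time
        debt += max(0.0, needed - available)     # shortfall carried forward
    return debt

# Three hypothetical sprints of 100 test hours each:
print(automation_debt([(100, 25), (100, 18), (100, 30)]))  # 15.0
```

Tracking this number per sprint is exactly the monitoring step that lets the fixed percentage be corrected over time.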
B. Pokering the required effort
Another way of budgeting is applying poker planning specifically for test automation. Initially this might seem like reserving a percentage of time, but pokering the effort uses the brain power of the entire team to reach an actual estimate of the required time.
By placing the items that qualify for test automation on the product backlog, insight is gained into the amount of test work that has to be automated. Pokering the items also shows whether the amount of work fits in the current sprint. If the necessary effort is large, one can decide to develop less functionality so that developers can assist in developing the necessary test automation.
This way of budgeting has some caveats. Is the backlog item to be automated described in sufficient depth to determine the scope properly? A second observation has to do with the stability of the features to which test automation has to be applied: is the team only capable of automating the tests at unit level, or also all the features themselves?
C. Pokering the required effort in combination with being 1 sprint behind
To obtain greater predictability of the work to be done, you can choose to start the test automation in the next sprint, using the version of the software that was produced in the previous sprint.
This approach has a number of advantages. The software that qualifies for test automation has reached a level of stability that makes it suitable for automation. Another major benefit is that more detailed information is available with regard to the functionality: the software has already been produced, which makes it easier to determine the effort necessary to automate the tests. Counting the number of functions goes back to the method of budgeting as applied within the TestFrame methodology.
This way of budgeting, however, overlooks an important Agile principle, namely that working software has to be produced at all times. As a team you cannot guarantee that all software from, for example, a sprint works, simply because you can no longer test everything manually.
VI. A worked example
To give an idea of how the various elements influence the needed capacity, a fictive example has been developed.
| Activity | Calculation | Result |
|---|---|---|
| Way of budgeting: 1 (fixed percentage) | 200 hours | |
| Scope of test automation: chain test | factor 1.5 | |
| Relevant general elements | 4 persons, 1 day at €1000 per day; 4 licenses at €1000 per year | |
| Required capacity | (200 × 1.2 × 1.2 × 0.8) × 1.5 × €75 | €25,920 |
| Training costs | ((4 × 8) × €75) + €1000 | €3,400 |
| License costs | 4 × €1000 | €4,000 |
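The figures in the example can be re-computed in a short script. The variable names are mine; the rates (€75 per hour, 4 persons, €1000 per license) and the factors all come from the example itself:

```python
# Re-computation of the fictive example in Section VI.

hourly_rate = 75
base_hours = 200                 # budgeting method 1: fixed percentage
future_proof = 1.2 * 1.2 * 0.8   # two aspects High, one Low (Table 4)
scope = 1.5                      # chain test

required_capacity = base_hours * future_proof * scope * hourly_rate
training_costs = (4 * 8) * hourly_rate + 1000
license_costs = 4 * 1000

print(round(required_capacity))  # 25920
print(training_costs)            # 3400
print(license_costs)             # 4000
```

Note how the future proof factors and the scope factor together turn the 200 budgeted hours into 345.6 effective hours before the hourly rate is applied.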
VII. Collection of the data
In the previous sections, a few methods have been described to estimate test automation in an Agile context. So far there is no real evidence as to which method is best; a first attempt was described in Section II. To verify the described estimation methods, a new survey will be set up to collect data based on Table 5.
Table 5: collection of the data
| Project | Type of initial cost | Prio | Continuity costs | SDLC | Estimation method | Estimated hours | Actual hours |
|---|---|---|---|---|---|---|---|
The major problem with the first survey was its timeframe: it was too short to collect data from different customers and projects. The new survey will run over a period of two years to collect the required data as a basis for a proper analysis. Data from at least 100 projects will be collected.
VIII. Return on investment
Initially, test automation costs money. As indicated, various elements have to be put in place before test automation can really be applied. The question is when the required investment will be recouped, and, tied to that: what exactly will you earn? Thoughts quickly go to quantitative aspects, but when it comes to return on investment (ROI), qualitative aspects also play a part. Table 6 describes a number of aspects that show how the investment can be recouped.
Table 6: Aspects relevant for the ROI

| Aspect | Explanation |
|---|---|
| Shortening test execution time | Manual execution is replaced by test tools, by means of which the tests can be executed in so-called off-peak hours. Besides this, a test tool is many times faster than a human being. |
| Prevention of regression | Because of the acceleration in test execution, it has become easier to execute all automated test scripts. Insight into possible regression can be gained quickly. |
| Impact analysis in case of modifications | By executing the automated test scripts in the first sprint, insight into the proposed modifications can be gained quickly. This can be especially beneficial in a DevOps environment. |
| Time to market | By raising the test execution power, the company can enter the market much faster than its competitors. |
| Independence | By automating the functionality, a company becomes less dependent on a few functional experts. Their expertise can be used in other areas, or in parts for which automation is not useful. |
| Reliability of the execution | The execution of the automated tests always happens in exactly the same way. This provides insight into the stability of the software. |
| Uniform way of reporting | The test tool generates reports that describe in detail what happened during the test execution. This makes tracking faults easier and less time-consuming. |
| Quality to market | The accuracy of the tests gives good insight into the quality and stability of the information system. |
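The recoup question itself can be made concrete with a simple break-even calculation. This is a hedged illustration only: the article gives no ROI figures, so all numbers and the function name below are hypothetical:

```python
# Break-even sketch for test automation: after how many test runs do the
# savings of automated execution cover the initial investment?
# All figures are hypothetical illustrations.
import math

def break_even_runs(initial_cost, cost_per_automated_run, cost_per_manual_run):
    """Number of runs after which automation is cheaper than manual testing."""
    saving_per_run = cost_per_manual_run - cost_per_automated_run
    if saving_per_run <= 0:
        return None  # automation never pays off on cost alone
    return math.ceil(initial_cost / saving_per_run)

# Hypothetical: €25,920 initial investment, €100 per automated run versus
# €1,500 for a full manual regression round.
print(break_even_runs(25920, 100, 1500))  # 19
```

Such a calculation only covers the quantitative side; the qualitative aspects of Table 6 come on top of it.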
IX. Conclusions and future work
This article describes three ways of thinking about how to budget future proof test automation in an Agile environment. These ways of thinking came into being because the existing methodology did not sufficiently support a reliable budget. They will have to be tried and tested in practice by means of case studies, and data has to be collected in order to eventually develop a balanced way of budgeting. Lastly, the article describes the benefits of applying test automation in an organization.
Acknowledgements
I would like to thank Marcel Mersie and Danny Greefhorst for their contribution and for giving me some of their precious time. Furthermore, I would like to thank the companies where I could set up test automation and develop the three ways of thinking described in this article.
References
- van Veenendaal, "The Testing Practitioner," chapter 1, UTN, 2002.
- van Veenendaal, M. Posthuma, "Testwoordenboek," p. 112, UTN, May 2010.
- Fewster, "Common mistakes in Test Automation," cm.techwell.com, 2001.
- Hoogendoorn, "Dit is Agile," Pearson, 2012.
- BlazeMeter, "The advantages of Manual vs Automated Testing," blazemeter.com, 2015.
- vd Burgt, I. Pinkster, "Succesvol testmanagement, een integrale aanpak," pp. 110-111, tenHagenStam, 2003.
- Greefhorst, M. Mersie, J. van Rooyen, "Principes van testautomatisering," Computable, 2015.
- D.J. de Groot, "Testgoal," SDU, 2008.
- vd Laar, "Technieken voor plannen en begroten van testprojecten," TestNet voorjaarsevent, 2009.
- Karlesky, M. vd Voord, "Agile project management," Embedded Systems Conference Boston, 2008.
- ISBSG, "ISBSG database."
- van Rooyen, "Effort estimation test automation in an Agile environment," Valid2016, 2016.
- improvement-services.nl, "term velocity."
- European Union, "GDPR," 2016.
- Schotanus et al., "TestFrame," chapters 6-8, Academic Service, 2008.
- Siteur, "Automate your testing, sleep while you are working," pp. 143-144, Academic Service, 2005.
- improvement-services.nl, "term poker planning."
- Beck et al., "Agile Manifesto," www.agilemanifesto.org, 2001.
- Bach, "Test Automation Snake Oil," www.satisfice.com, 1999.
- Huijgens, "Agile werkt," Academic Service, 2012.
- Guru99, "Automation Testing for Agile methodology," guru99.com, 2017.
- M. Fowler, "Continuous Integration," 2006.