
Most automation efforts result in failure. The reasons range from technical limitations of the automation framework to a lack of personnel dedicated to the automation project. People come and go, management initiatives change, and the application under test changes with the requirements. Beyond the technical and people-related issues, automation itself is far removed from generating any revenue stream. Yes, even further removed from revenue than testing itself, and automation tool-smiths cost a lot more money too.

So instead of asking, ‘How do we keep this automation project sailing in the right direction in the midst of wind gusts?’, we should first ask, ‘How do we keep the automation project afloat?’

To stay afloat, the automation project must continue to prove itself worthy of the cost throughout the duration of the project. This entails publishing metrics CONTINUALLY to answer questions like:

  • What is the value we are getting from automation?
  • What is the cost of automation?
  • What is the impact of reducing the automation effort? What is the impact of discontinuing it altogether?

Meaningful dashboards and metrics are the only viable way to answer those questions – the only shot you’ve got at keeping your ship above water. I’ve sat through numerous meetings where the same old reply surfaces whenever automation status comes up. Here’s a typical response that management will nod at in front of you, all while thinking, ‘just get the job done already, why don’t you!’:

‘Ummm, yeah we are still in the process of ummm punting the effort to get the build machine to communicate via TCP/IP with our test servers. This may require IT’s help. We all know how that will go (chuckle chuckle). We have been talking to the global QA team and we are currently awaiting their response on the automation framework design decisions. Once that is kicked into the pipeline, then we’ll be on our way to assuring that we’re all on the same page. We’ve also noticed issues with our source management system but can’t seem to nail down the issue…’

C’mon now! We’re engineers – not sales folks (although I wish I had the skills and salary of a bright sales person). We’re not supposed to use phrases like ‘punting the effort’ or ‘kicked into the pipeline’ or ‘nail down the issue’. Just report your status – are we on track? Yes or no? If not, then why not? Management isn’t gullible – they can read between the lines, and they can tell when you’re merely talking their talk. It’s all gray and cloudy, and there’s nothing solid in your speech even though it sounds good.

I don’t currently use metrics. To me, publishing metrics is analogous to keeping a workout log – and the fact that I don’t keep one could explain why my workouts have plateaued and why my body composition isn’t what it used to be. I may try to publish the following metrics. Who knows… I may even be motivated to bring some projects to fruition so that their results can feed the metrics-generation systems.

* Bug Find Rate – A weekly or biweekly count of bugs found by automation. This requires tagging new bugs in your issue tracking system as automation related, which may be as simple as appending ‘AUTOMATION’ to the bug summary, or more process oriented: create a field in the issue tracking system that records whether the bug was found by automation (see the first sketch after this list).

* Bug Find Rate Spikes – A short term rise in bug find rate attributed to a new form of automated testing that has been deployed. Ask yourself – would the bug have been found (or found as quickly) were it not for the new automated testing?

* New testing – This is more of a qualitative metric. It suggests automation is testing areas that manual efforts haven’t been able to touch due to resource or manpower constraints. For example, an automated system could cover thousands of system configurations that would be costly to exercise manually. Automation also makes it possible to test certain input and load conditions (various string-length inputs, or spawning numerous thread requests in a given time period for load testing) that would otherwise be impossible manually (see the load-burst sketch after this list).

* Delivered solutions – A list of tools created by the automation team that have provided value in the past, are currently in use, or are being developed. When I buy something, I get a receipt that says what I bought. Management wants to see what they’re buying, so at least publish a list via wiki, email, or a document management system to let folks know what they’re paying for. This could spawn a couple of other metrics, like a ‘request list’ or an ‘assignment list’. Remember, test automation needs to go through the software development lifecycle as well: it must be planned, designed, implemented, and tested before it is delivered to the test team.

* Test cycle time – How long does it take to verify and assure the quality of the product once a candidate build is delivered to QA?

* Technical support issue type – If automation goes as planned, technical support shouldn’t get any calls or emails related to basic product functionality. Try to categorize the incoming tech support calls to gauge the quality of the product being built. Do your tech support calls concern simple functionality issues, or do they concern complex system-level configuration or load issues? Ultimately, automation should reduce the number of basic functionality calls being dialed in to tech support.

* Quality of metric information – Automation should make it easier to mine test execution results. That raw data can then be massaged into better quality information about the product, giving management a shot at making better decisions (see the results-mining sketch below).
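
As a rough illustration of the bug find rate idea above, here is a minimal Python sketch. It assumes you can export bugs from your issue tracker as summary/creation-date pairs; the sample data and the ‘AUTOMATION’ tag convention are placeholders for whatever your tracker actually provides.

```python
# Minimal sketch: weekly bug find rate for bugs tagged as automation-found.
# The bug list below is hypothetical sample data; in practice you would
# export it from your issue tracking system.
from collections import Counter
from datetime import date

bugs = [
    {"summary": "AUTOMATION: login page rejects valid password", "created": date(2008, 3, 3)},
    {"summary": "Report export truncates long file names", "created": date(2008, 3, 4)},
    {"summary": "AUTOMATION: crash on 10,000-character input", "created": date(2008, 3, 11)},
]

def weekly_bug_find_rate(bugs, tag="AUTOMATION"):
    """Count automation-found bugs per ISO week, keyed by (year, week)."""
    weeks = Counter()
    for bug in bugs:
        if tag in bug["summary"]:
            year, week, _ = bug["created"].isocalendar()
            weeks[(year, week)] += 1
    return dict(sorted(weeks.items()))

if __name__ == "__main__":
    for (year, week), count in weekly_bug_find_rate(bugs).items():
        print(f"{year} week {week:02d}: {count} bug(s) found by automation")
```

A sudden jump in these weekly counts right after a new piece of automation is deployed is exactly the ‘bug find rate spike’ described above.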
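
For the ‘new testing’ point, here is a sketch of firing a burst of concurrent requests within a short period – the kind of load condition that is impractical to reproduce by hand. The send_request() function is a hypothetical stand-in; a real suite would call the application under test instead of sleeping.

```python
# Minimal sketch: spawn many concurrent requests and time the burst.
# send_request() only simulates latency here; swap in a real call against
# the application under test.
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(i):
    """Placeholder for one request against the application under test."""
    time.sleep(0.01)  # simulate round-trip latency
    return f"request {i} ok"

def burst(n_requests=200, n_threads=20):
    """Fire n_requests across n_threads worker threads and time the burst."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = list(pool.map(send_request, range(n_requests)))
    return len(results), time.time() - start

if __name__ == "__main__":
    completed, elapsed = burst()
    print(f"{completed} requests completed in {elapsed:.2f}s")
```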
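
And for the last item, a small sketch of mining test execution results: it boils JUnit-style XML output down to numbers a dashboard can consume. The inline XML is sample data; in practice you would aggregate the result files your automated runs actually produce.

```python
# Minimal sketch: summarize a JUnit-style results file into dashboard numbers.
import xml.etree.ElementTree as ET

SAMPLE_RESULTS = """\
<testsuite name="smoke" tests="3">
  <testcase classname="login" name="valid_user" time="0.4"/>
  <testcase classname="login" name="bad_password" time="0.3">
    <failure message="expected error dialog not shown"/>
  </testcase>
  <testcase classname="report" name="export_csv" time="1.2"/>
</testsuite>
"""

def summarize(xml_text):
    """Return (total, failed, slowest test name) for one test suite."""
    suite = ET.fromstring(xml_text)
    cases = suite.findall("testcase")
    failed = [c for c in cases if c.find("failure") is not None]
    slowest = max(cases, key=lambda c: float(c.get("time", "0")))
    return len(cases), len(failed), slowest.get("name")

if __name__ == "__main__":
    total, failed, slowest = summarize(SAMPLE_RESULTS)
    print(f"{total} tests, {failed} failed, slowest: {slowest}")
```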

Automation, or tooling support, is simply a cost designed to cut costs. This must be justified CONTINUALLY to keep your boat afloat.
