Pick a Tool for a Picky App

23/09/2016

WAdminIT

Functional test automation is on everyone’s list these days for any software development effort, especially when using Agile practices. We all want to increase the quality of our software by testing earlier and more often, so what’s the best functional test tool for you?

There are many factors to consider when selecting a test automation tool, from corporate technical direction to the current inventory of tools and the skills of your team. Managing these dimensions is hard enough; now add the challenge of poor test tool recognition of your application under test. How does that change your test strategy or the tool you select? Every selection effort is different, with its own criteria, but a few areas of focus are common to any test tool assessment. Let’s review selecting a test tool for a poorly recognized application.

Vitals:

-Windows environment with a 25+ year-old client-server application
-Hodge-podge of new (.NET) and old (C++, etc.) technologies inside an antiquated container that hides most of the properties usually used to recognize controls on the screen
-The application is being rewritten, but the rewrite will take about three years, so automating the old system still has value
-Adopting Scrum practices with a desire to automate functional testing within the sprints
-No test automation tools currently in use

First things first: define objectives. Our key objectives for the test tool were:

-Ease of use – no test automation code presented to the user
-Recognizes as much of the 25-year-old application as possible
-Easy maintenance
-Provides value within the first three months

Ease of use is an obvious objective, but we are also interested in bringing testing closer to the business, so the tool must work at a higher level of abstraction than the test automation code. Non-technical personnel will be authoring test scripts, so the tool must have a framework that shields the tester from code. An added benefit of such a framework is that by documenting your test case in the tool, you are creating test automation. Of course, we need the tool to recognize the application we want to test, which is a real challenge when the underlying technology is very old and was written without testability in mind. We realize that no tool on the market can recognize every control in the application, so our lives will include workarounds.
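To make that abstraction concrete, here is a minimal sketch of the keyword-driven idea in Python: testers author plain rows of (keyword, arguments), and a small dispatcher maps each keyword onto automation code. This is an illustration, not any vendor’s actual framework, and the keywords, screen, and field names are all hypothetical.

```python
# Minimal keyword-driven sketch: testers write rows of (keyword, args);
# a dispatcher maps each keyword to the automation code behind it.
ACTIONS = {}

def keyword(name):
    """Register a function as the implementation of a tester-facing keyword."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@keyword("open_screen")
def open_screen(screen):
    print(f"navigating to {screen}")          # real code would drive the app here

@keyword("enter_value")
def enter_value(field, value):
    print(f"typing {value!r} into {field}")   # e.g. via send-keys for odd controls

@keyword("verify_text")
def verify_text(field, expected):
    print(f"checking that {field} shows {expected!r}")

# A tester-authored test case: no automation code, just keywords.
test_case = [
    ("open_screen", ["Customer Lookup"]),
    ("enter_value", ["Account #", "1042"]),
    ("verify_text", ["Name", "Jane Smith"]),
]

for kw, args in test_case:
    ACTIONS[kw](*args)
```

The tester-facing rows double as documentation of the test case, which is exactly the “documenting your test creates the automation” benefit mentioned above.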

Navigate the sea of tools

There are many functional automation tools on the market, from major players like HP and IBM to niche players and visionaries that promise to out-test the rest. With our key objectives set, we could use them to cut the field from roughly 20 tools down to about 3. What we found interesting during the review was that, with so many applications going web-based, many vendors have dropped or reduced their client-server support. The 3 remaining tools all fit our list of objectives, each with a slightly different approach to achieving them. One of the majors even made the list but was soon eliminated because it relied on a 3rd-party framework, which added to the cost. That would be a good solution for most companies, since they usually already own this popular tool, but we don’t.

Finalists:

With only two tools left on the list, we measured them against our list of 73 detailed criteria, covering areas of interest like:

-Ease of use
-Licensing – flexible to support multi-national locations
-Extensibility – able to extend and customize capabilities of the tool
-Recognition of the application – list of all known controls in the application
-Best workarounds to accomplish contiguous, end-to-end automated testing
-Integration with other tools – continuous integration
-Platforms supported – OS, environments, Citrix
-Capabilities of the tool
-Company
-Support
-Metrics

Onsite proof-of-concept (POC) visits were scheduled for the tool vendors to come in and show us how well their tools worked on the 25-year-old application. One of the vendors offered a downloadable trial with a tutorial guide for getting a head start on basic functionality. This can sometimes backfire: self-learning without proper instruction can lead to a bad experience and unfairly rule a tool out of the selection.

Both vendors delivered a successful POC, with about the same level of recognition of the old application. Ease of use was similar as well, although their user interface approaches differed. With no clear-cut winner, we had each of the 3 evaluators weight the criteria from 1 to 5 in order to score the tools and see if we could determine the better one. We focused on criteria where the weightings conflicted, e.g. one evaluator scored a criterion a 1 while another scored the same criterion a 5, as those areas would highlight differences. Unfortunately, the weighting was not fruitful: each criterion was effectively pass/fail, and both tools either did or did not satisfy the same ones, negating the weighting approach.
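As an illustration of the scoring mechanics (with made-up criteria, weights, and results rather than our actual 73-criterion matrix), a weighted score can be computed like this:

```python
# Hypothetical criteria, per-evaluator weights (1-5), and pass/fail results.
criteria = ["ease_of_use", "recognition", "ci_integration"]

weights = {                      # one weight per evaluator, 1 (least) to 5 (most important)
    "ease_of_use":    [5, 4, 5],
    "recognition":    [5, 5, 5],
    "ci_integration": [3, 1, 2],
}

results = {                      # True if the tool satisfies the criterion
    "Tool A": {"ease_of_use": True, "recognition": True, "ci_integration": False},
    "Tool B": {"ease_of_use": True, "recognition": True, "ci_integration": True},
}

for tool, meets in results.items():
    # A tool earns the combined evaluator weight for each criterion it meets.
    score = sum(sum(weights[c]) for c in criteria if meets[c])
    print(f"{tool}: {score}")
```

When both tools pass or fail the same criteria, the same weights land on both sides of the comparison, which is exactly why the weighting changed nothing for us.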

After reviewing the criteria that scored high (5, most important), it was apparent that recognition of our old application was the area to target. Since both tools had the same level of recognition, we looked at how we would accomplish automation in the areas where the tool did not recognize the controls on the screen. This began the Battle of the Workarounds, with one target: recognizing and functionally validating the areas of the application that challenge the tools. What does this mean in practice? When presented with an area of poor recognition, we used “send-keys”, for example, to navigate functionality through keystroke equivalents. Verification used image capture-and-compare or OCR, which adds maintenance overhead: these techniques rely on the pixels on the screen for identification and comparison, so differing screen configurations and settings can produce false failures. The evaluation approach felt like irrational behavior, as tool selection was now based on which tool had the best non-best-practice methods of use…but it did ferret out a winner.
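For a sense of what these workarounds look like outside any particular vendor’s tool, here is a rough sketch using the open-source pyautogui library; the keystrokes, accelerator, and reference image are all hypothetical.

```python
import pyautogui

# "Send-keys" workaround: drive an unrecognized screen purely via keystrokes.
pyautogui.write("1042", interval=0.05)   # type into the currently focused field
pyautogui.press("tab")                   # move to the next control
pyautogui.hotkey("alt", "s")             # fire the (hypothetical) Save accelerator

# Image-based verification: search the screen for a previously captured
# reference image. This is pixel-sensitive, so resolution or theme changes
# can cause the false failures noted above.
try:
    match = pyautogui.locateOnScreen("expected_confirmation.png")
except pyautogui.ImageNotFoundException:  # newer versions raise instead of returning None
    match = None
print("PASS" if match else "FAIL")
```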

Real Users – our winner was also validated in sessions with the testers who would actually be using the tool. We wanted our selection activities to include “field testing” so the decision was not made in a vacuum. Our goal was to introduce each tool, get a sense of the testers’ initial perception of its ease of use, and ask them to think about how the tool would integrate into their testing. Our testers were not formally trained, of course, but we walked them through the capabilities of each tool and authored a test script together. Each session with our guinea pigs lasted about four hours, two hours per tool. The consensus was unanimous: the testers agreed with our selection. For the record, we stayed neutral during the field-testing sessions and did not influence their thoughts.

We were very pleased to identify the best test tool given our objectives and constraints. Another factor worth mentioning is the support and overall experience both tool vendors provided during our selection process. The vendor of the selected tool was extremely responsive and even threw in a month of free consulting time to help improve the tool’s ability to recognize our 25-year-old application.

Our tool selection journey took about four weeks to complete, not including holidays, since the activity ran from mid-December through January. For any tool evaluation and selection, here are the key areas of focus to help narrow the field and home in on the best functional test automation tool for you:

-Define Objectives to make sure the tool is aligned with strategic corporate direction
-Define Evaluation Criteria to detail the tool capability areas of interest
-Weight the Evaluation Criteria and Score the Results to help determine the best tool for you
-Battle of the Workarounds – concentrate the evaluation on the tool capabilities that matter most to you
-Field Testing to introduce the finalist tools to real users and get their opinion on ease of use

I hope this post provides useful guidance for your own test automation tool evaluation and selection process, illustrated here with an example of a challenging effort.

Wondering which tool was selected? Contact us: sales@arcsona.com or leave us a comment below!
