Automation Guild 2017: 8 things testers should know

Matthew Heusser, Managing Consultant, Excelon Development

The first annual Automation Guild conference, held last week, included 31 speakers, all experts in test automation. The venue made the event unique: It was entirely online.

With one day each devoted to frameworks, best practices, techniques, DevOps/testing, and APIs/mobile testing, the conference offered a surprising amount of variety and depth. The practical takeaways included tools you can use to test responsive design, ways to bring the business into deciding what is a bug and what is not, and techniques for improving automated Selenium tests so that they encourage reuse instead of copy and paste. I've included more detail on the highlights below.

Considering attending an online conference? The dynamics change in two ways. First, the audience can watch video presentations in any order, including weeks or months after the event. Second, the conferring, the root of the word "conference," risks going away entirely.

To ensure that collaboration happened, organizer Joe Colantonio split the event into two parts. First, each speaker created a pre-recorded video presentation, ranging from 20 minutes to an hour in length, that participants could watch at any time, starting weeks before the conference. The live sessions were question-and-answer follow-ups: The audience could use a chat box or email to ask questions, and speakers responded in real time. Colantonio curated the questions and moderated the discussion.

Here are the key takeaways that testing engineers will want to know.

Use visual testing to put decisions back into the hands of the business

Adam Carmi, the CTO of Applitools, was the first presenter at the conference. Applitools serves as an extension of UI automation tools such as Selenium: It takes screen captures, compares them between runs, and reports the differences as errors.

I expected those errors to be reviewed by a tester, but Carmi suggested a different approach: Have a business sponsor review and approve (expected change) or reject (bug report) every visual difference each morning. This is a great way to give the business a feel for what is actually changing, get them involved, and put decisions back into their hands. In so doing, you can prevent arguments and triage meetings over such things as what is a bug and what is not.
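To make the workflow concrete, here is a minimal sketch of a visual checkpoint using the Applitools Eyes Java SDK with Selenium. The API key, application name, and URL are placeholders, and the exact calls may vary by SDK version; treat it as an illustration, not Carmi's demo code.

```java
import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VisualCheckSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey("YOUR_API_KEY"); // placeholder

        try {
            // open() wraps the driver so Eyes can capture screenshots during the test
            driver = eyes.open(driver, "Demo App", "Home page visual check");
            driver.get("https://test.companyname.com"); // placeholder URL

            // Capture the page; the Applitools service diffs it against the baseline
            eyes.checkWindow("Home page");

            // close() fails the run if there are unapproved visual differences
            eyes.close();
        } finally {
            eyes.abortIfNotClosed(); // clean up if the test ended early
            driver.quit();
        }
    }
}
```

Each difference then appears in the Applitools dashboard, where the business sponsor can approve it as an intentional change or reject it as a bug.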

Extend automation inspection to layout and responsive design with Galen

The Galen Framework is an open-source tool and language that lets you specify layout: for example, that a button should be centered and have a width of 20 percent of the screen width. Once you've completed the specification, you can use Galen to run every web page against different resolutions in the cloud. The system might then report back, for example, that the screen fails at 1024 by 768 pixels, and why. With Galen, you can extend automated checks to layout and responsive design.
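For a sense of how that runs from test code, here is a rough sketch using Galen's Java API. The spec file path, the "desktop" tag, and the URL are hypothetical; the layout rules themselves would live in the .gspec file (for example, rules saying a button is centered and takes 20 percent of the screen width).

```java
import com.galenframework.api.Galen;
import com.galenframework.reports.model.LayoutReport;
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import java.util.Arrays;

public class LayoutCheckSketch {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://test.companyname.com");             // placeholder URL
            driver.manage().window().setSize(new Dimension(1024, 768));

            // Validate the page against the layout rules in the spec file,
            // using only the rules tagged "desktop" (path and tag are hypothetical)
            LayoutReport report = Galen.checkLayout(driver,
                    "specs/homepage.gspec", Arrays.asList("desktop"));

            if (report.errors() > 0) {
                // The report lists which objects broke which rules at this resolution
                System.out.println("Layout check failed at 1024x768");
            }
        } finally {
            driver.quit();
        }
    }
}
```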

Screen Capture from galenframework.com.

Testing vendors embrace open source tools

Open-source tools like Selenium typically come stand-alone, with a learning curve that's not integrated into the development cycle. One way vendors add value is to take care of those issues. After all, why spend six months building your own framework that could end up a legacy mess, when you can purchase a tool designed to schedule and run tests, improve reporting, handle versioning, hook into continuous integration, and so on?

Most vendors that presented were doing just that: integrating with open-source tools such as Selenium, making record and playback easier, or providing higher-level frameworks to support testing as part of the delivery pipeline.

When automating GUI testing, focus on what's business-critical first

If the goal of automated testing were to cover 100% of what can be tested, your testing would never be finished. When attendees asked the experts at Automation Guild about this, they invariably suggested that teams focus on testing their most critical features.

But features are a bit like the federal budget: For every feature, there is a constituency for which that feature is critical. Otherwise, it wouldn’t have been created, would it?

In response, Dave Haeffner, a consultant with Arrgyle, suggested a few guidelines for finding critical features. First, he said, ask how the business makes money. In the case of Amazon, for example, that's through search, product pages, and the shopping cart.

Then ask what features your customers actually use, perhaps by searching through the logs. Finally, review which browsers and devices they use. Having those three data points should give you enough information to decide where to automate first.

Legacy framework do-overs: Be mindful not to create another legacy framework

Several Automation Guild attendees complained about the broken legacy frameworks they had been given. They asked how you know when it's time to start over—and how to get permission from management to do so.

The problem here is that the framework is creaky for a reason. "Ugly code," after all, was born out of bug fixes and special conditions. Start over with a clean slate, and you’ll have bugs and special conditions that need fixes, which leads to another legacy system.

Colantonio suggests refactoring your legacy framework rather than throwing it away. The advice from Dave Haeffner on grading tests and Angie Jones' discussion of patterns for good tests provide a basis for that refactoring: an ideal to move toward.

Go beyond good and bad—grade your Selenium tests

It's easy to complain about tools that drive a graphical user interface (GUI): They tend to be slow and brittle, and the tests devolve into spaghetti code. People tend to make binary judgments about tests: Either a test is good or it's bad, and the bad tests are a complete waste.

Instead, Haeffner suggested grading your tests, with the intention of writing better tests over time. For example, explicit setup and teardown of the browser should not be directly in the test. He also suggests deducting points for inline locators that should be abstracted out (-2 points per locator) and for hard-coded sleeps (-5 points per sleep).
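As a rough sketch of how those deductions play out, here is a hypothetical Selenium fragment before and after refactoring; the locators and timings are made up, and the point values follow the scale above.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class LoginSteps {

    // Before: -2 for the inline locator, -5 for the hard-coded sleep
    static void submitLoginUngraded(WebDriver driver) throws InterruptedException {
        driver.findElement(By.cssSelector("#login-form .submit")).click();
        Thread.sleep(5000); // hope the dashboard has loaded by now
    }

    // After: locators live in one named place, and the wait is explicit
    static final By SUBMIT_BUTTON   = By.cssSelector("#login-form .submit"); // hypothetical
    static final By DASHBOARD_TITLE = By.id("dashboard-title");              // hypothetical

    static void submitLoginGraded(WebDriver driver) {
        driver.findElement(SUBMIT_BUTTON).click();
        new WebDriverWait(driver, 10).until(
                ExpectedConditions.visibilityOfElementLocated(DASHBOARD_TITLE));
    }
}
```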

You can use these grading criteria to give each automated test case a score. To improve the tests, don’t throw them away: Score them and refactor to improve the scores. In addition to the conference materials for his session, Haeffner pointed to this screencast that describes his rating process.

When using Page Objects, build your own base class

Angie Jones, consulting automation engineer at LexisNexis, walked the audience through not just creating a test in Selenium, but doing so using her method. Jones' pattern of work has two major components: PageObjects that expose the functionality of a page in code, and unit tests. Once the project is initialized, Jones creates a BaseTest class and has all of her tests inherit from it. The BaseTest will, for example, launch the browser and go to a default page during setup, then tear down the browser when the test is complete.
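Here is a minimal sketch of that BaseTest idea with JUnit and Selenium; the class names, default page, and browser choice are illustrative, not Jones' actual code.

```java
import org.junit.After;
import org.junit.Before;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Every test class extends BaseTest, so browser startup and teardown
// live in exactly one place instead of being repeated in each test.
public abstract class BaseTest {
    protected WebDriver driver;

    @Before
    public void setUp() {
        driver = new ChromeDriver();                 // browser choice could come from config
        driver.get("https://test.companyname.com");  // default starting page (placeholder)
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```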

She also uses a standard PageObject base class to hold common methods, and a Utils class that can hold things such as the base URL, user names and passwords, which browser to launch, and so on. All of those little details matter. With a configurable base web page, you can tie tests to a specific server, such as test.companyname.com, in one place. Without it, when your team wants to use a different test server, that may mean searching and replacing across the entire codebase, or worse, making changes individually.
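A small sketch of what such a configuration holder might look like; the property names and default values are assumptions for illustration.

```java
// Central place for environment details, so pointing the suite at a
// different test server is a one-line change rather than a codebase-wide edit.
public final class Config {
    // Read from a system property so CI can override it; the property name
    // and default URL are placeholders.
    public static final String BASE_URL =
            System.getProperty("baseUrl", "https://test.companyname.com");

    public static final String BROWSER =
            System.getProperty("browser", "chrome");

    private Config() { }
}
```

A page object can then build its address from Config.BASE_URL, and a CI job can point the same suite at another server by passing, for example, -DbaseUrl=https://staging.companyname.com.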

Most recently, Jones wrote for TechBeacon on how to build an agile-friendly test automation framework. Automation Guild presenter Nikolay Advolodkin also discussed how page objects can stabilize your test automation.

Legacy applications abound—and testing tools are getting better

Rosalind Radcliffe, an IBM distinguished engineer, works with clients on modernization and the move toward DevOps. She also works internally to modernize the company's z Systems mainframes.

During her session, question after question came up about how to do unit and system testing in a z Systems environment. She suggested using zUnit, a unit-testing framework for COBOL and PL/I, and turning old reporting systems into services, or APIs, so they can be called like a web page and tested through the web interface.

Some attendees said they were still using classic "green screen" technologies, because they are fast for data entry. Radcliffe's solution for those teams was to use IBM Rational Functional Tester.

While there are a lot of legacy systems still running out there, that does not mean that you must take a legacy approach to testing.

Toward better test automation

The biggest shakeup for web testing in the past few years, as was clear at Automation Guild, has been responsive design. The Galen Framework and visual testing tools cut down the combinatorial problem of test tooling, while advice on grading Selenium tests and following good patterns makes the checks more stable over time and easier to reuse.

These are my key takeaways from this event. What were yours? If you missed the event and want to know more about any of the presentations discussed above, it's not too late: You can still register for full access to the presentations.


  
