
Lessons Learned in Automated Testing

21 Jul 2014 - 5 min read
Here are some lessons that I learned in automated testing

I want to discuss some takeaways from my role as a Quality Assurance (QA) Software Developer. My experience in QA was two-fold: I started as a Software Engineer responsible for QA on a Scrum team, and later worked as a QA Engineer responsible for implementing automated testing infrastructure. In the latter role, I made sure teams had no issues blocking them from writing automated tests.

These two roles gave me a lot of insight into the challenges that teams face when testing code with automation. Below, I share a few of the common challenges Agile teams face when trying to write automated tests and ways that I believe these challenges can be approached.

Challenge 0 - No Time

When deadlines loomed in a sprint and the burn-down chart stayed flat, one of the first tasks that suffered was the quality and completeness of the automated tests. This problem was further exacerbated when product owners and the Scrum Master accepted incomplete automation because doing more automated tests would cause a User Story to be carried over.

Solution 0 - Budget time for automated testing.

A team must include the writing and implementation of automated tests in their development schedules. It is also critical that automation is not seen as something that can be dumped halfway through a sprint when schedules get tight. Every stakeholder must be committed to utilizing automated tests and see them as an essential part of feature completeness.

Challenge 1 - Who is Going to Fix the Broken Tests?

The great part about having broad code coverage from automated tests is that when a change breaks existing behavior, the tests fail and tell you so. However, when teams add new features that break tests, there can be frustration over who is responsible for fixing those tests.

Solution 1 - It is your job to fix your tests and your job to not break others’ tests.

In a perfect world, your team’s automated tests are well-written and specific enough that a change in another team’s code should not break them. Even well-written tests cannot always be insulated from changes, because a test could simply depend on a specification that has changed. A frequent issue I came across was programmers becoming frustrated when they felt that “Team X” was always breaking their tests. I think the best way to resolve these situations is through better communication between and among teams, as well as more responsibility on behalf of individual teams to act when their changes cause cascading issues.
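To make "specific enough" concrete, here is a minimal sketch of one way to insulate a team's test from another team's internals: depend only on the agreed-upon interface and stub it in the test, so only a change to the specification itself can break the test. The names PricingService and OrderTotaler are hypothetical, and the example assumes JUnit 4 and Mockito are available; it is an illustration, not code from the original article.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class OrderTotalerTest {

    // Dependency owned by another team; our test relies only on its published interface.
    interface PricingService {
        double priceFor(String sku);
    }

    // Code under test, owned by our team.
    static class OrderTotaler {
        private final PricingService pricing;

        OrderTotaler(PricingService pricing) {
            this.pricing = pricing;
        }

        double total(String sku, int quantity) {
            return pricing.priceFor(sku) * quantity;
        }
    }

    @Test
    public void totalMultipliesUnitPriceByQuantity() {
        // Stubbing the other team's service insulates this test from their internal changes;
        // only a change to the agreed-upon interface (the specification) can break it.
        PricingService pricing = mock(PricingService.class);
        when(pricing.priceFor("SKU-1")).thenReturn(2.50);

        OrderTotaler totaler = new OrderTotaler(pricing);

        assertEquals(5.00, totaler.total("SKU-1", 2), 0.001);
    }
}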

Challenge 2 - Noise in the Test Environments

When an automated test run constantly (or randomly) has failing tests, developers quickly learn to ignore the results. Understandably, it can be time-consuming to look at every failure if you have many tests and test runs. When developers make a habit of ignoring noisy test failures, they often miss actual bugs in the code.

Solution 2 - Get rid of the noise.

There are two common problems that need to be addressed when removing noise. The first is to assess and solve infrastructure issues that cause failures (when the failures are not the tests’ fault). As part of a test infrastructure team, I had to resolve a number of situations where tests interacted with each other poorly or made incorrect assumptions about the state of the test environment. The second is to address how a team behaves when it has failing tests. Teams should respond to a failing test with a “stop the line” mentality. The easiest way for a product owner to enforce this behavior is to not accept User Stories from a team while it has failing tests. While this is often seen as a harsh policy, I believe it is an effective way to reinforce good behavior in response to failing tests.
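As one hedged illustration of removing environmental noise, the sketch below shows a test that creates and cleans up its own workspace instead of assuming a shared directory is in a known state, so repeated or parallel runs cannot interfere with each other. The ReportWriterTest name and file contents are hypothetical, and the example assumes JUnit 4.

import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class ReportWriterTest {

    // JUnit creates a fresh directory for each test and deletes it afterwards,
    // so no test can be affected by files another test (or test run) left behind.
    @Rule
    public TemporaryFolder workspace = new TemporaryFolder();

    @Test
    public void writesReportIntoItsOwnWorkspace() throws IOException {
        File report = workspace.newFile("report.txt");

        // Stand-in for the code under test; a real suite would call production code here.
        try (FileWriter writer = new FileWriter(report)) {
            writer.write("42 tests passed");
        }

        assertTrue("report should exist in the per-test workspace", report.exists());
        assertTrue("report should not be empty", report.length() > 0);
    }
}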

Challenge 3 - This Test Case is Impossible to Automate

I would often hear that a particular functional area of the code base was impossible to automate.

Solution 3 - Fix the code.

Rarely is it true that a feature is impossible to automate, but sometimes implementing automated tests is inordinately difficult. As much as I crusade for complete automation coverage, I also understand the realities of software development.

For example, in some cases, the time it takes to test a particular feature can outweigh (by a large margin) the value of those tests. Still, I think it is important that teams look carefully at areas of code that seem too difficult to cover with automated tests and ask the following questions: Are there tools or testing-infrastructure changes that could help automate tests? Can I refactor the code to make automated testing easier? How can I avoid writing code that is this difficult to automate in the future?
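As a small, hypothetical illustration of refactoring to make automated testing easier: code that reads the system clock directly is hard to test deterministically, but injecting a java.time.Clock lets a test pin the "current" time. The ExpiryChecker class below is an invented example under those assumptions, not code from the article.

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

public class ExpiryChecker {

    private final Clock clock;

    // Production code passes Clock.systemUTC(); tests pass a fixed clock.
    public ExpiryChecker(Clock clock) {
        this.clock = clock;
    }

    public boolean isExpired(Instant expiresAt) {
        // Reading time through the injected clock (instead of calling Instant.now()
        // directly) is what makes this behavior straightforward to automate.
        return Instant.now(clock).isAfter(expiresAt);
    }

    public static void main(String[] args) {
        // A fixed clock makes the time-dependent behavior deterministic in a test.
        Clock fixed = Clock.fixed(Instant.parse("2014-07-21T00:00:00Z"), ZoneOffset.UTC);
        ExpiryChecker checker = new ExpiryChecker(fixed);

        System.out.println(checker.isExpired(Instant.parse("2014-07-20T00:00:00Z"))); // true
        System.out.println(checker.isExpired(Instant.parse("2014-07-22T00:00:00Z"))); // false
    }
}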

Challenge 4 - The User Interface (UI) Tests are Always Broken

As I addressed in Challenge 2, constantly failing tests can cause apathy to grow among developers. In my experience, the tests that fail most frequently are automated UI tests: they are commonly fragile and often not specific enough to qualify as good automated tests. Moreover, many developers lack expertise with UI automation tools.

Solution 4 - More training and sparing use of automated UI tests.

The Agile test pyramid tells us that automated UI testing should be used sparingly. Limiting automated UI tests is the simplest way to avoid the frustration they cause, but this requires that teams be disciplined in other areas of test automation so that very little testing through the UI is needed. It is also important that developers are well trained in creating automated UI tests; otherwise, they may avoid writing them even when they are appropriate.
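One hedged sketch of what the test pyramid implies in practice: instead of driving a browser to verify a validation message, the validation rule is pulled out of the UI layer and covered by a fast unit test. LoginValidator and its messages are illustrative names only; the example assumes JUnit 4.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import org.junit.Test;

public class LoginValidatorTest {

    // Validation logic pulled out of the UI layer so it can be covered by fast,
    // stable unit tests rather than a fragile browser-driven test.
    static class LoginValidator {
        String validate(String username, String password) {
            if (username == null || username.trim().isEmpty()) {
                return "Username is required";
            }
            if (password == null || password.length() < 8) {
                return "Password must be at least 8 characters";
            }
            return null; // no validation error
        }
    }

    @Test
    public void rejectsShortPasswords() {
        assertEquals("Password must be at least 8 characters",
                new LoginValidator().validate("josh", "short"));
    }

    @Test
    public void acceptsValidCredentials() {
        assertNull(new LoginValidator().validate("josh", "longenough1"));
    }
}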

In Conclusion

I hope these lessons provide some insights into your current issues with automated testing and that they help provide direction as your teams increase the use of automated testing. In the comments, please share the most common challenges you have faced with automated testing and how you solved them.

– Josh Robinson, asktheteam@keyholesoftware.com

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

