Testing, and in particular test automation, is a hallmark of a good agile software development process. The Agile movement is probably the factor most responsible for the widespread adoption of test automation.
Software development teams are, by and large, creating better software, faster, with less formal process than they were 10 or 15 years ago. Whether that is due to the maturation of the craft is debatable, but most developers pay at least lip service to agile methodologies.
But just because you do testing, or even test-first development, is your test development agile?
Let’s review the Agile Manifesto and compare your testing practices against its values.
Individuals and interactions over processes and tools
When you write tests, do you think first about how they will be written and what tools you will use, or about who will use the tests?
Does using a BDD framework like Cucumber or Specflow really involve business in testing? Does it make it easier for developers to communicate and testers to maintain tests?
If the answers are yes, by all means use the tools. If not, why are you using them? Are you using a specific tool or process because you feel that using it makes you agile? Because the author of the tool is an “Agile” guy? Because it’s popular — or is interesting and new?
Working software over comprehensive documentation
Working software, not working tests. Tests are documentation. Tests tell you that your software is working. If tests are slow, hard to maintain, or indeterminate, they hinder your ability to know if your software is working, and should be thrown out.
But automated tests are software too. It’s better to have a few working automated tests than many broken ones. Target high-value areas.
Things that are repetitive, easily automatable, slow or error-prone when performed manually, or likely to uncover bugs are worth automating. A good automated test should fall into at least three of these categories.
One of the appeals of agile is self-documenting code. Tests are a part of that. Code should describe business cases (with some extra punctuation). Tests should also be readable — and understandable — by business users.
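As a sketch of what “code that describes business cases” might look like, here is a hypothetical pytest-style example. The `Cart` class, its methods, and the discount rule are all invented for illustration; the point is that the test name and body read close to plain language:

```python
# Hypothetical example: the test reads like the business rule it checks.
# `Cart`, its methods, and the discount rule are invented for illustration.

class Cart:
    """Minimal stand-in for a shopping cart domain object."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

    def apply_discount(self, percent):
        """Return the total after a percentage discount, rounded to cents."""
        return round(self.total() * (1 - percent / 100), 2)


def test_a_ten_percent_discount_reduces_the_total_by_ten_percent():
    cart = Cart()
    cart.add("notebook", 10.00)
    cart.add("pen", 2.00)
    assert cart.apply_discount(10) == 10.80
```

A business user who cannot read Python can still read the test name aloud and confirm it matches the acceptance criterion, which is most of what “readable by business” needs to mean.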
Customer collaboration over contract negotiation
Tests are not meant to be comprehensive requirements documents or technical specifications. They are not a contract of whether code is “complete” or “done” or “quality”. They are information, meant to be shared.
Code should not be thrown over the wall to be tested. Testing should not happen after development. Tests should not interpret requirements.
Test cases should make sense to all parties involved (business and development). Business should be involved in tests. They should be able to understand what tests can and cannot do (and what they do well and poorly). They should be able to run tests, suggest new tests, and read test reports. A test report should be something useful, not throwaway documentation or CYA.
Responding to change over following a plan
Test suites tend to get crufty. Part of that is how they are written. If there is a master Excel spreadsheet, or a tangle of spaghetti inside unit tests, you’re not going to want to change them. Write a test the way you want it to read. It should not expose implementation details. Hide your spaghetti behind meaningful function names. Learn some design patterns.
Test should be succinct and targeted. They should be built of reusable components. There is a lot of repetition in testing, that means a lot of opportunity for modularity.
But don’t overarchitect your tests. Don’t go overboard on your test-factory-builder-closure. Tests are throwaway code. You want to be able to throw away a test when you don’t need it anymore. But you also want to be able to maintain it (and you want your most junior test automation engineer to be able to too.)
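One lightweight sketch of “hide your spaghetti behind meaningful function names”: push the messy mechanics into a few reusable step functions so the test itself reads as a sequence of business actions. Everything here is invented for illustration; in a real suite the `session` dict might be a Selenium driver or an HTTP client, and the steps would wrap that machinery instead:

```python
# Sketch: reusable, well-named steps hide implementation detail.
# The `session` dict and "secret" password are stand-ins, invented
# for illustration; a real suite would wrap a driver or API client.

def log_in(session, username, password):
    """One reusable step; the messy details live here, not in the test."""
    session["user"] = username if password == "secret" else None
    return session["user"] is not None


def add_item_to_cart(session, item):
    session.setdefault("cart", []).append(item)


def cart_contents(session):
    return session.get("cart", [])


def test_a_logged_in_user_can_add_an_item_to_the_cart():
    session = {}
    assert log_in(session, "alice", "secret")
    add_item_to_cart(session, "notebook")
    assert cart_contents(session) == ["notebook"]
```

When the login flow changes, only `log_in` changes; the tests that use it still read the same. That is as much architecture as most suites need, and a junior automation engineer can follow it.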
How about the principles behind the Agile Manifesto? Let’s take a look, and see where there is overlap and where there might be more practical applications to agile test development.
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
Are your tests valuable? They are when they find (or prevent) defects. But if delivery is hindered by slow tests, they lose value.
Are your tests executing with every delivery? Every check-in?
Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
Are your tests brittle? Do they break when requirements change? How easy are they to change? How reliable are they when changed?
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Do you have to wait until after code is developed to write tests? Do you have to wait for tests to run before knowing if code is broken?
Business people and developers (and testers) must work together daily throughout the project.
Is business involved in testing? Writing tests? Running tests? Do tests reflect acceptance criteria? Is test feedback instantaneous (or as quick as possible)? Does everyone understand what the tests do (and do not do)?
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
Do testers own quality? (I hope not.) Does everyone feel responsible for quality? Do developers and business care about tests?
Are tests limited by environment access, resource allocation, or security constraints? Are you using slow, brittle UI tests because directly accessing the system (e.g. database) is not possible? Because mocking back end services is not feasible?
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Again, is code thrown over the wall for testing? What about requirements? Would a tool written by developers (or an architectural change) make testing easier?
Are you testing what is most important? Get business to describe three concrete “needs” for a feature or product, then ask them to eliminate one and still be willing to ship. This isn’t about sacrificing quality; it’s about establishing priorities.
Working software is the primary measure of progress.
Tests are meant to tell you that the software is working (or broken). Do they accomplish this? Do they spend too much time covering edge cases? Or do they miss important edge cases? Which edge cases are important? And how do you know?
Working tests have value. Non-working tests do not. Indeterminate — or inaccurate — tests have negative value.
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Do tests slow you down? Even on the best teams, regression testing starts to become onerous around sprint 10 or 15. Technical debt builds up. Are tests becoming technical debt — or are they designed to help prevent its buildup?
Some of the most valuable tests never uncover defects. They help refactor code so that no defects appear and changes can be made with confidence.
How about your test code? Can it be easily refactored? Can you swap out a UI implementation for a web service or API implementation? Can you test with mock systems?
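One way to make a test swappable across a UI, an API, or a mock is to write the test logic once against a small interface and inject the implementation. This is a sketch under assumed names (`UserGateway`, `InMemoryGateway` are invented; a real suite might pair a Selenium-backed implementation with a requests-backed one):

```python
# Sketch: test logic written once against an interface, so the
# implementation (UI, web service, or mock) can be swapped out.
# All class and method names here are invented for illustration.

from typing import Protocol


class UserGateway(Protocol):
    def create_user(self, name: str) -> None: ...
    def user_exists(self, name: str) -> bool: ...


class InMemoryGateway:
    """Mock system: lets the same check run with no UI or network."""

    def __init__(self):
        self._users = set()

    def create_user(self, name: str) -> None:
        self._users.add(name)

    def user_exists(self, name: str) -> bool:
        return name in self._users


def check_created_users_are_visible(gateway: UserGateway) -> None:
    """The business check, written once against the interface."""
    gateway.create_user("alice")
    assert gateway.user_exists("alice")
    assert not gateway.user_exists("bob")


def test_with_mock_system():
    check_created_users_are_visible(InMemoryGateway())
```

Swapping in a UI-backed or API-backed gateway is then a one-line change in the test, which is exactly the kind of refactorability the question above is asking about.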
Continuous attention to technical excellence and good design enhances agility.
Are your tests spaghetti? Are they refactorable? Are they reusable and modular? Can everyone understand how they work? Tests should be easily understood by your worst developers. They should be understood by your business users.
Simplicity–the art of maximizing the amount of work not done–is essential.
Are your tests valuable? Do they test all permutations when only a few are needed? Is your test framework “magical”? Do you do the simplest thing possible? Remember, the simplest test to implement (and maintain) is the one that is never written. And sometimes, even when a test is necessary, automation is not the answer. Sometimes it is even easier to simplify the use case or eliminate an acceptance criterion than to build complex testing (or system) logic.
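A small sketch of “a few cases instead of all permutations”: pick boundary and typical cases that each earn their place, rather than enumerating every combination. The `shipping_cost` function and its pricing rule are invented for illustration:

```python
# Sketch: a handful of representative cases instead of every
# permutation. `shipping_cost` and its rates are invented examples.

def shipping_cost(weight_kg, express):
    base = 12.0 if express else 5.0
    return base + 1.5 * weight_kg


# Boundary and typical cases only; each row has a reason to exist.
CASES = [
    (0.0, False, 5.0),    # lightest possible standard parcel
    (2.0, False, 8.0),    # typical standard parcel
    (2.0, True, 15.0),    # same parcel sent express
]


def test_representative_shipping_costs():
    for weight, express, expected in CASES:
        assert shipping_cost(weight, express) == expected
```

Three cases cover the rate, the express surcharge, and the zero boundary; the remaining weight-by-speed permutations would add runtime and maintenance cost without adding information.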
The best architectures, requirements, and designs emerge from self-organizing teams.
Is testing process dictated from above? Do you have a test tools team that builds libraries that product testers use? Do you enforce test management or automation tools company wide? Can teams choose their own tools and strategies?
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
Do you take time to review your testing process? Do you measure its success? Do you make changes when needed? Are you afraid to fail? Do you talk about what has worked and what hasn’t?
I’ve tried to cover each point in the Agile Manifesto lightly, though I’m sure there is a lot more that can be said for each topic. And doubtless there are some that are not addressed specifically by the Agile Manifesto.
I encourage testers to think of their tests as a product and apply agile principles and good design to their tests — whether automated or manual. Indeed, I think we should stop thinking of tests as automated or manual, and consider that merely an implementation detail. I think that helps to write better, more meaningful tests.
I’d like to work on a more concise list of 5-10 points to check regularly as an Agile tester and would welcome feedback and suggestions.