Test Automation isn’t for everyone

I once knew a guy who was a talented craftsman; he made beautiful hand-crafted furniture. His business became popular and demand increased. He took on investment, built a large shop, hired a couple of assistants, and bought additional machinery to help meet the demand.

The business was a big success and grew even more. Within a couple of years he had nearly paid back his loan, but then he sold the business (which is still successful today under new owners) and went back to working alone on individual orders. He still does quite well, and with the proceeds from the sale he has a comfortable life, but nothing like what it could have been had he kept the company.

It turned out he wasn’t as interested in running a manufacturing business as in working with his hands, alone, in his little shop in the woods.

Testers (and their bosses) should consider this before switching from manual QA to automation. It takes different skills and a different mindset, which can be learned, but may not be what you enjoy.

I happened to enjoy the change myself, but I can sympathize with those who don’t.

Challenges inheriting an existing test framework

This post started as a comment on the following discussion on LinkedIn.

Inheriting a framework can be challenging.

First of all, a real-world framework is more complex than something freshly created. There are bound to be exception cases and technical debt included.

Secondly, an existing test framework was built by people on deadlines with limited knowledge. There are bound to be mistakes, hacks, and workarounds, not to mention input from multiple people with different skill sets and opinions.

In this case, the best way to understand your framework is to understand your tests. Your tests exercise the framework. Pick one of low-to-moderate complexity and work your way through it, understanding its configuration, data, and architecture.

Don’t be afraid to break things. After all, you have all these tests that will let you know if you broke the framework.

Lastly, having a checklist of things to look for (good and bad practices) will help you understand the framework better, and help you know what quality of framework you’re dealing with. Is this mystery function really clever, or just a bad idea?

Look for standard patterns: OOP, Page Objects, etc. Also look for common problems: hard-coded values, repetitious code, etc.
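As a concrete example of the Page Object pattern to look for, here is a minimal sketch. The `LoginPage` class, its locators, and the stub driver are all hypothetical; a stub stands in for a real Selenium WebDriver so the example is self-contained and runnable without a browser.

```python
# Minimal Page Object sketch. LoginPage and its locators are invented for
# illustration; StubDriver records interactions instead of driving a browser.

class StubElement:
    def __init__(self, driver, locator):
        self.driver = driver
        self.locator = locator

    def send_keys(self, text):
        self.driver.actions.append(("send_keys", self.locator, text))

    def click(self):
        self.driver.actions.append(("click", self.locator))


class StubDriver:
    """Records interactions so we can see what the page object did."""
    def __init__(self):
        self.actions = []

    def find_element(self, by, locator):
        return StubElement(self, locator)


class LoginPage:
    """Encapsulates one page's locators and interactions, so tests read as
    intent ("log in as alice") rather than raw element lookups."""
    USERNAME = ("id", "username")  # locators live here, not in the tests
    PASSWORD = ("id", "password")
    SUBMIT = ("id", "submit")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


driver = StubDriver()
LoginPage(driver).log_in("alice", "secret")
```

When the locator for the submit button changes, only `LoginPage` needs to be edited; a framework where every test repeats its own `find_element` calls and hard-coded IDs is a sign of the repetitious code mentioned above.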

Test Automation Can’t Catch Everything

I remember a time, years ago, when I was working at a company at which I learned a lot about my craft.

Selenium was fairly new and I was one of the early adopters. I’d developed a pattern for structuring tests that I shared with the community and found that several others had independently developed ideas similar to my own “Page Objects.”

Agile was just beginning to penetrate mainstream development, and was at the same time attracting some healthy skepticism. Pair programming and test-driven development were considered “extreme,” and other practices like Scrum were thought of as either common sense or a cargo cult.

Continuous Integration was still a novel idea; my first job at Microsoft was essentially part of a manual continuous integration process, since no tools yet existed to accomplish it. Now there were open source projects like CruiseControl and Hudson (which became Jenkins) coming out.

And while I’d been involved in, and an advocate of, each of these (Selenium, Agile, and Continuous Integration), I’d yet to see them widely adopted and successfully implemented within an organization.

But about the time I came back from Fiji and stepped back into software development, all these things were starting to coalesce. At least they were for me, in the mid-2000s.

We had a cross-functional team of (mainly) developers, testers, sysadmins, designers, and analysts, all working together. We had business and customers giving feedback and acceptance criteria. We wrote user stories and assigned points. We wrote tests first, and testers paired with developers. We sat with customers and took usability notes. We talked design patterns and had self-organizing teams. We had fun and I learned a lot. I still keep in contact with some of my co-workers from more than a dozen years ago.

It was about a year and a half into it that we started to hit the wall. Tests were taking too long to execute. Complexity was slowing us down. We kept plugging along and using technology to fight the friction.

Parallel builds, restructuring tests, virtualized development environments. We fought technical debt with technical solutions and beat it back down.

I thought we were winning. But I had a manager, an old school QA guy, who knew that something was off. I think the rest of us were too busy churning out code, delivering features, spinning our wheels to see it.

But he saw it. He tried to alert upper management; he tried to get through to developers. We had lots of automated tests. We were delivering features every two weeks. Occasionally we’d stop and do a spike to experiment, or take some time off to reduce tech debt, refactor code, or rewrite tests. But we still had decent momentum.

Finally he got together a group of stakeholders. Give me 15 minutes, he said, and I’ll break it. It took him maybe five. And it wasn’t that hard. And it was something users could be expected to do.

The moral of the story is: you can’t catch everything with automation. Automation is good: it frees up time to do other things, it allows feedback to occur faster, and it helps testers concentrate on finding issues instead of verifying functionality. It helps developers think about how they’re going to write their code, reduce complexity, and define clean interfaces.

Test automation is like guard rails. It can keep you from falling off the edge, or it can give you something to hold onto as you traverse a tricky slope.

It can catch obvious problems, much like the railing that helps you not fall off the ledge (you’re not planning to do that anyway). Anyone can walk along a narrow path and not fall off a cliff; test automation makes it so that you can run. But it won’t stop you from going over if you go past it.

In order for tests to be effective you need to have clear requirements, and then you need to explore beyond them. Test automation isn’t good at exploring. It’s good at adding guard rails to a known path.

And even if you catch all the functional problems, you need to be able to check performance, ensure security, and test usability.

If you have testers churning out tests, both automated and exploratory, but no one else is involved or hears their feedback, it’s not going to matter much. It’s like having guard rails and warning signs that everyone just hops over and walks right past. It’s no good finding bugs if they don’t get fixed. And eventually, if the bug backlog gets too big, people will just ignore it.

You need test automation, but you also need exploratory, human testing. And you need everyone, not just testers, to be concerned with quality: quality code, environments, user experience, and requirements.

So use test automation for what it does best: providing quick feedback on repetitive, known functionality. Don’t try to make it replace a complete quality assurance process.