Page-Based Testing

If you’ve used Selenium, Watir, or some other web automation framework, you’re probably familiar with the record-and-playback style of test automation:

  • type this
  • click that
  • verify something

It isn’t very context-aware, and you can get lost in the details.  You practically have to execute it (or read the comments) to find out where you are.
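
In code, that flat style looks something like this (a rough sketch using Selenium’s Python bindings; the URL and element IDs are made up):

  from selenium import webdriver
  from selenium.webdriver.common.by import By

  # A flat, record-and-playback style script: no context, just steps.
  driver = webdriver.Firefox()
  driver.get("http://mysite.example.com/login")            # hypothetical URL
  driver.find_element(By.ID, "username").send_keys("bob")  # type this
  driver.find_element(By.ID, "submit").click()             # click that
  assert "Welcome home, Robert" in driver.title            # verify something
  driver.quit()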

Page-based testing is a term I made up (I’m sure I’m not the first to use it) to describe a way of organizing automated tests that groups components into pages, so you get a little more information and can do a bit more context validation.  Essentially, you create a bunch of objects that represent the pages on a web site and group elements and actions under them.  Pages can inherit common functionality from a base class, and attributes from a site.  The result is more descriptive code, such as this (a fuller sketch follows the list):

  • type ‘bob’ on mysite.loginpage.username
  • click on mysite.loginpage.submit
  • verify mysite.homepage.title is “Welcome home, Robert”
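
A minimal sketch of what those page objects might look like, in Python with Selenium (the class names, element IDs, and URL are my own invention, not any particular framework):

  from selenium import webdriver
  from selenium.webdriver.common.by import By

  class Page:
      # Base class: every page inherits the site's driver and attributes.
      def __init__(self, site):
          self.site = site
          self.driver = site.driver

  class LoginPage(Page):
      @property
      def username(self):
          return self.driver.find_element(By.ID, "username")

      @property
      def submit(self):
          return self.driver.find_element(By.ID, "submit")

  class HomePage(Page):
      @property
      def title(self):
          return self.driver.title

  class MySite:
      # The site object groups the pages and holds shared attributes.
      def __init__(self, base_url):
          self.driver = webdriver.Firefox()
          self.base_url = base_url
          self.loginpage = LoginPage(self)
          self.homepage = HomePage(self)

  mysite = MySite("http://mysite.example.com")
  mysite.driver.get(mysite.base_url + "/login")
  mysite.loginpage.username.send_keys("bob")
  mysite.loginpage.submit.click()
  assert mysite.homepage.title == "Welcome home, Robert"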

However, as any good tester can see, this is more work.  On the one hand, it’s great because you can point to how many lines of code you had to write, and thus couldn’t possibly have time to check every possible exception case.  On the other hand, it means more typing.

But with the proper level of abstraction, and some helpers to make generating the code easier, the extra typing becomes manageable.  Unfortunately, it might also make improving coverage easier, but thankfully that happens through automation.

Falsely Responsible

Here’s a great blog post about testers being held falsely responsible for a software release.

It’s a situation I’ve seen countless times.  Everyone is breathing down QA’s neck — “Is it ready to ship?”

Is it really our decision to make?  If I am to be held responsible for the product’s possible failure, shouldn’t I be rewarded for its success?

Test-first development isn’t testing

Test-first development isn’t testing.  It’s development.  It’s prep work for a developer.  It doesn’t test anything.  And as a matter of fact, if you could write tests first that just work, without having to test them yourself before running them, you’re probably a perfect coder, and writing tests would be superfluous.

Now, running a test after the code is written is testing, but in the simplest possible sense.  In about the same way that spell-checking is editing.  Or rather, you could consider the spell checker the compiler, and the notoriously bad grammar checker equivalent to the also notoriously bad warnings a compiler gives.  But no one considers a document even “proof-read” just because Microsoft Word didn’t underline anything.  Actually, a document of any length (say, half a page or more) is probably unreadable if there are no squiggly underlines in it.

What would be the equivalent of unit tests (or even automated functional tests) for a document?  Pretty difficult to come up with something useful, isn’t it?

But code is structured, you complain?  Stop complaining, so is English.  As a matter of fact, supposedly your code is just a much longer description of a specification written in English (or Swahili).  It’s longer because computers aren’t as smart as people, and because of that, you have to talk down to them and use lots of little words and a lot more punctuation.

Let me help you out.  There are some things that can be automated.  Such as word count.  Formatting guidelines.  Header style.  Beginning, middle, and end.  You could come up with run-on sentence algorithms or text analysis.  But the real trick is turning a document into an executable, because that’s what code does.

How are documents executed?  By modifying them.  The aforementioned word count can make sure the document length doesn’t change — or if it does, that the changes occur in an expected area.  Or that formatting changes (margins, fonts, etc.) are applied across the board.
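
As a toy illustration of that kind of check, here’s roughly what a “document regression test” could look like (a sketch only; the file names and the edited-section index are invented):

  # Toy "regression test" for a document: check that an edit to one
  # section didn't change the word count of the others.
  def word_counts_by_section(text):
      # Crude: treat blank-line-separated paragraphs as sections.
      return [len(s.split()) for s in text.split("\n\n")]

  before = open("draft_v1.txt").read()   # hypothetical file names
  after = open("draft_v2.txt").read()

  old, new = word_counts_by_section(before), word_counts_by_section(after)
  assert len(old) == len(new), "a section appeared or disappeared"
  for i, (a, b) in enumerate(zip(old, new)):
      if i != 2:   # suppose section 2 is the one we meant to edit
          assert a == b, f"section {i} changed unexpectedly"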

Fairly useless stuff, right?  But that’s about all automated tests can do for software, too.  Granted, there are a lot of variables in software that aren’t in documents, and code changes happen even more often.  So there is value in REGRESSION tests being automated.  Because a code change is like a document edit.  You don’t want to edit one paragraph and find that the format (or word count) of another paragraph changed.

Now, I’m not arguing against automated testing (it’s what I do for a living), but I am arguing in favor of knowing what it can do, so it can do it well.  The biggest travesty in quality assurance is the so-called “test first” methodology.  It’s really just a cop-out that says you’re not going to test.

Supposedly, the idea is to do the things first that you never have time to finish.  You know, like washing the dishes.  By the time you’ve fixed dinner and eaten it, it’s time for your favorite television show (or online chat game, for you WoWers), and there just isn’t time to do the dishes.

Here’s a novel idea — why not do them before cooking?  Cooking is what takes up all the time, along with the cutting and mixing and associated tasks.  Cooking is Very Important.  After all, without the cooks, there’d be nothing to eat.  Therefore, of course, cooks are the most important people in the cooking process, right?  As such, they don’t have time to do the dishes, so you need to find someone else (less important) to do them.  And do them before you start cooking.  Makes sense, right?

Okay, you could test (I mean wash dishes) while cooking.  Every time a cook dirties a dish, a tester (I mean dishwasher) could wash it. Then, everyone could sit down to dinner and all the dishes would be done, and no one would miss Wipeout (or another quest to chop wood or fight squirrels).

Except of course the dishwasher, who’d also miss dinner.  Because while a spoon can get washed after every use, that big pot of goulash can’t get cleaned until after everyone has been served (and served seconds, if the cook is any good.)

And of course, there’s the little problem the cook never thought about: all the dinner plates, silverware, and glasses.  Those need to be washed too.  If only they could be washed first, or iteratively.

Advantages of wikis

The biggest advantage of a wiki is the ease of linking between documents.

The second biggest (and distinguishing) advantage of a wiki is the ability for everyone to edit.

The first advantage, linking, is not the sole domain of a wiki.  But many people (myself included) use wikis specifically for this purpose.  The ability to create (and track) documents through links is the principal power of the web as a whole, but I can think of few applications outside of wikis that make this easy.  I maintain that the principal feature of a CMS is likewise the ability to link pages, and that it is probably the most poorly implemented, as well as the biggest source of lock-in.

So what makes up this feature?  I think it consists of two main parts: the ability to assume part of a URL, and the ability to tie the page title to the part of the URL that still needs to be specified.

One important feature of a wiki, for me, is the ability to build a hierarchy of pages, such as tools:testing:bug-tracking:mantis (or tools\testing\bug-tracking\mantis).  This is namespacing in code, and in wikis it is valuable too.  Using composition, you can also include an element in a different hierarchy (namespace), such as projects:open-source:php:mantis.  It is a powerful organizational feature.
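
Here’s a rough sketch of how such namespacing might work under the hood (the storage scheme and base URL are assumptions of mine, not how any particular wiki does it):

  pages = {}   # key: namespaced title, value: page content

  def add_page(namespace, title, content=""):
      key = namespace + ":" + title
      pages[key] = content
      return key

  def url_for(key):
      # Tie the title (and its namespace) directly to the URL path.
      return "http://wiki.example.com/" + key.replace(":", "/")

  mantis = add_page("tools:testing:bug-tracking", "mantis",
                    "Mantis is a PHP bug tracker.")
  # Composition: the same page appears in a second hierarchy.
  pages["projects:open-source:php:mantis"] = pages[mantis]

  print(url_for(mantis))
  # http://wiki.example.com/tools/testing/bug-tracking/mantis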

A CMS that had a good sitemap generation tool would be advantageous, particularly if pages could then be composed of components in different hierarchies.  By sitemap generation, I mean the ability to create hierarchies of pages as objects that can interact (through links) and be called by aliases or alternate routes.

I mention routes because I’m thinking of Rails’ “routes” mechanism for mapping pages (or servlet-mapping, for Java aficionados).  The other feature of wiki URLs, CMSes, and Rails that people are drawn to is pretty URLs, or rather, descriptive URLs.
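
A sketch of the routes idea, reduced to its essence (not actual Rails, just an illustration; the paths reuse the mantis example above):

  # Several descriptive URLs routing to the same underlying page.
  routes = {
      "/tools/testing/bug-tracking/mantis": "mantis",
      "/projects/open-source/php/mantis":   "mantis",  # alternate route
      "/mantis":                            "mantis",  # short alias
  }

  def resolve(path):
      return routes.get(path, "404")

  assert resolve("/mantis") == resolve("/tools/testing/bug-tracking/mantis")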

The other feature of wikis that people like, as I mentioned, is the ability for everyone to edit.  It’s also the most fraught.  Wikipedia is the hallmark of success for this feature, but a large part of its effort goes into fighting corruption, spam, and bias.  It’s the reason something else hasn’t taken its place.  For every Wikipedia, there are a million overrun, neglected, or out-of-date wikis (like my own).

Permissions, captchas, and administrators are the answer to this.  But most wikis die of organizational failure.  The ability to edit pages (and link mapping), and the ability to structure them in hierarchies, are critical to the success of a wiki.

The choice to restrict access is a critical one, and paradoxically, it is most important (and most harmful) before a wiki has drawn a user base large enough to be self-sustaining.

The question then is: how do you build up the user base of a wiki to a self-sustaining level without it getting overrun or disorganized?  Better spam prevention, approval workflows, and organizational editing may be the answer.

I’d be curious to know what that point is.  I’d guess it would be around 100 dedicated or 1000 casual users.  For a restricted environment (such as an intranet or members-only wiki) that number might be as low as 10, or it may exist as a ratio, perhaps 1:3 dedicated to casual users, where organization and relevance, rather than spam, are the battles that need fighting.  The ratio may be the key, with spammers counting as some number of casual users.  Clearly, the administrative tools also affect (and should be aimed at lowering) this ratio.
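
To make the guess concrete, here’s some back-of-the-envelope arithmetic (every number below is illustrative, taken from the guesses above rather than measured anywhere):

  # Purely illustrative weights for the guesses above.
  DEDICATED = 1.0     # one dedicated user = one unit of upkeep
  CASUAL = 0.1        # ~10 casual users worth one dedicated (100 vs 1000)
  SPAMMER = -0.3      # a spammer consumes some users' worth of effort

  def self_sustaining(dedicated, casual, spammers, threshold=100):
      score = dedicated * DEDICATED + casual * CASUAL + spammers * SPAMMER
      return score >= threshold

  print(self_sustaining(100, 0, 0))    # True, per the 100-dedicated guess
  print(self_sustaining(0, 1000, 0))   # True, per the 1000-casual guess
  print(self_sustaining(90, 50, 100))  # False: spam drags it below the line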

Google Docs ate my presentation

It’s tempting.  Just click here and create a presentation.  No need to install PowerPoint (or OpenOffice).  But don’t do it.

I was about three hours of work into a presentation when Google Docs decided to crash.  It saved regularly, so I felt comforted, but guess what?  My presentation doesn’t exist anywhere except as a broken link in Google’s system.

And it wasn’t just the presentation; I also lost about a half page of commentary on each of 14 slides.

Ideas for testing articles

Here are some articles/blog posts kicking around in my head that I thought I’d write down.

  • 3 types of tests (validation, regression, exploratory)
  • Focused exploratory testing (and session-based testing)
  • Web automation tools (Selenium, Watir, Canoo, Mechanize)
  • Page based testing framework (page elements and IDE completion)
  • Introduction to Selenium with STIQ (SolutionTestIQ)
  • The QA stack (version control, documentation and requirements, task management, bug tracking, builds, code analysis, continuous integration, deployment, test framework, automation — examples: subversion, xwiki, projectpier, bugzilla, ant, cobertura, cruisecontrol, capistrano, junit, selenium )
  • Bugs, Tasks, Tickets, and Features
  • Bugs vs. Defects (one bugs you, the other doesn’t match a requirement)
  • Triage – taming the bug database (include backlog review)
  • Grouping tests (with tags)
  • wiki test cases
  • test consulting survey (would you hire an independent consultant?)
  • testing and domain knowledge (why a consultant with a recurring part time relationship)
  • Version Control Systems – one of the most important QA tools (it allows you to break things)
  • Archetypical prospective customers
  • Scrum and Standup meetings

If someone asks for one of them, I’ll get to work on it. Otherwise, I’ll work through the list and add/subtract as I feel.

Ten commandments for business failure

I was at the library the other day looking for books about how to start a business and consulting and stuff like that, and Kelsey found this book:

“The Ten Commandments for Business Failure” by Donald Keough.

I thought I might as well figure out how to do it right, so I added the book to the stack.

Don was the president of Coca-Cola in the 1980s and is probably most famous for heading the company while Coke was losing the cola wars to Pepsi, and especially for the New Coke debacle (which some say was just a brilliant marketing ploy).  The reintroduction of Coke Classic was probably the tipping point of the reversal that led to Coca-Cola once again enjoying a dominant position worldwide in the fizzy, sweet, caffeinated beverage business.

It was an interesting read.  The author doesn’t dwell too much on New Coke, but he does mention it and accepts at least partial blame.  The moral of that particular story is “Don’t listen to consultants” — or rather, “Listen to consultants if you want to fail.”  Good advice, for my clients, that is.

The whole book is written with that somewhat gimmicky formula: “Do X if you want to fail”, meaning “don’t do X if you want to succeed.”

It isn’t specifically about his leadership at Coca-Cola, but rather general business advice.  The most profound point is probably in the introduction, where he says he doesn’t have the formula for success, but that he does know 10 rules (actually 11; there’s a bonus chapter) that are almost guaranteed to help you fail.

I don’t want to give away the rest of the commandments, because it might hurt book sales, and the 10 (or 11) commandments are really just pithy, common-sense advice, but worth reiterating.  The book is actually quite an enjoyable read, more so for the interesting, optimistic tone of a successful man who may have made mistakes, but was definitely not a failure.

I thought he was surprisingly “with it,” and his advice relevant to the times, current with technology and the business climate.  I particularly enjoyed his chapter on pessimism, and how he skewered the global warming doomsayers without mentioning it or them by name (except for Paul R. Ehrlich, one of the leaders of the global warming movement, and then only in the context of his Malthusian claims, as recently as 20 years ago, about an impending new ice age).

Despite a few “Norman Einstein” moments (such as claiming India acquired nukes in the 1970s) and a slight tone of the born-privileged, I enjoyed the book and appreciated the advice.  I found myself liking, and wishing to meet, Don Keough long before the end of the book, and that opinion hadn’t changed by the time I was done.

Unfortunately, though, I plan on ignoring the 10 commandments, and finding a way to fail on my own merits.