More on Canoo Web Test and other tools

My last post drew a hostile comment by Marc Guillemot, one of the committers to Canoo Web Test and HTMLUnit.  I may have made some errors, but I am not aware of them.  I think he may have been confused that I mentioned HTMLUnit uses HTTPClient, and assumed I meant that HTTPClient has all the features of HTMLUnit.

I found on his blog a biased comparison of Canoo and Selenium that essentially backs up the points I was trying to make.  It seems one of his chief frustrations is people not being aware of the two different ways web applications can be tested, which was the point of my last post.  From what I can tell, his stance is that I’m an idiot, but that he agrees with me.

It’s nice to have company.

Through my own searching, I found out that there is in fact a Web Test recorder, and I hope to try it out soon.

On my current projects, I’m committed to Selenium, and as most of its fans know, it’s more than just a browser record & playback toy.  As Marc and the Canoo company like to quote, “Record/playback is the least cost-effective method of automating test cases.”

I don’t have time or inclination to debate the one true testing tool, but I disagree that the answer is a complex browser-stub, though I commend the Canoo team for their efforts.  I have used and will undoubtedly use Canoo in the future.

The reality is that printing out the HTML is the least complex part of an application’s functionality, followed by querying the database.  The user interface does in fact play a significant role, and there is often more complexity in the JavaScript presentation than in the remainder of the logic in most business applications (assuming network communications, transactions, queuing, etc. are abstracted into frameworks, which tools like Canoo WebTest are no better at validating than browser-driving tools like Selenium and Watir.)

There is a place for both types of tools, and I had hoped to have stated that clearly in my last post.

I also learned about some other interesting tools: Celerity, which uses HTMLUnit and JRuby and has Watir-like syntax; and CubicTest, an Eclipse plugin for writing Selenium and Watir tests.

Another interesting idea I found in some mailing list archive (can’t find the link) is to use Selenium IDE to generate WebTest scripts.


Two ways to automate web testing

There are two ways to automate web testing.  The goal is to test the functionality of a web application.  One way is to write automation that drives a browser.  The second is to use a library that imitates a browser session and communicates directly with the server.

Tools that use the first method include open source applications such as Selenium, Watir, and Samie, as well as commercial products from HP/Mercury, IBM/Rational, and Borland/Segue.  There are also two ways to drive the browser.  The way used by Watir, Samie/Pamie, and presumably by the commercial applications is to use the automation APIs provided by the browser, typically Microsoft COM, which means IE.  Selenium uses a different method, driving the browser through JavaScript: it can use a proxy server to inject JavaScript code into the browser, or a JavaScript library can be included on the server.

The other method is to cut the browser out of the equation.  Tools such as HTTPUnit, HTMLUnit, Canoo Web Test, WebDriver, and TestMaker use this method, and a similar approach is used by tools including JMeter, LibWWW, and curl.  The obvious disadvantage of this method is that without a real browser, you can’t catch browser compatibility issues.
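The browserless style can be illustrated in plain Ruby (a toy sketch, not HtmlUnit or Canoo; the one-shot server and the page it serves are made up for the example): talk HTTP directly and assert against the returned markup.

```ruby
require 'socket'
require 'net/http'

# A tiny one-shot HTTP server standing in for the application under test.
PAGE = "<html><head><title>Login</title></head>" \
       "<body><a href='/home'>Home</a></body></html>"

server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]

thread = Thread.new do
  client = server.accept
  client.gets   # consume the request line; ignore the rest of the headers
  client.write "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n" \
               "Content-Length: #{PAGE.bytesize}\r\n\r\n#{PAGE}"
  client.close
end

# The "test": no browser involved, just an HTTP round trip plus
# assertions on the markup, the kind of check a type-2 tool automates.
response = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/login"))
title = response.body[%r{<title>(.*?)</title>}, 1]
links = response.body.scan(/href='([^']+)'/).flatten

puts "status: #{response.code}"   # expect 200
puts "title:  #{title}"           # expect Login
puts "links:  #{links.inspect}"   # expect ["/home"]
thread.join
```

Everything here runs in-process and finishes in milliseconds, which is exactly the lightweight, concurrency-friendly property the browserless tools trade browser fidelity for.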

The HTMLUnit library (built on top of HTTPClient, and used by several of the tools above) is virtually a browser in its own right, with its own quirks.  It has support for cookies, JavaScript, the DOM, and HTTPS, though HTTPClient itself only handles the HTTP layer.  However, WebDriver’s HTMLUnit driver doesn’t enable JavaScript by default.

Tools that use macros to drive a browser fall into the first category; tools that carry on an “expect”-like dialogue with the server fall into the second.

I usually advocate the first method, because nothing beats having a real browser exercise your application.

The often overlooked advantage of the second method is that you can more easily run browserless tests as integration and smoke tests.  Because it doesn’t have the overhead of the full browser, it is lightweight, and client independent.  It runs faster and can be more easily run concurrently (which makes it useful for performance testing.)  Browser timing issues (and crashes) are less of a problem.

I’d actually recommend type 2 tools for testing links, page flow, content (text/images), and principal functionality.  But if user interface testing is needed, I’d use type 1 tools. However, the truth is that UI and ajax timing related issues are much easier to find with manual testing.  I’d guess 3/4 of what automation buys you can be done with a browserless tool.

The advantage browser-driven testing buys you is recording tools.  The penalty is in the stability and speed of your tests.

Code Coverage tools

Some code coverage (unit test coverage) tools:

EMMA – open source

Cobertura – open source

Clover – Atlassian

Hansel & Gretel – only found info on an IBM developerworks article

Quilt – open source

Jester – open source

Jester takes a very interesting approach. It actually changes the code and then sees if your tests break. For instance, it might change if (x > y) to if (false)

All of the above target Java. What about for other languages like Perl, PHP, Python, and Ruby? What about for Smalltalk?

PHPCoverage – Spikesource

(here is an interesting list of tools for PHP)

rcov – for Ruby

heckle is a Ruby version of Jester.
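The mutation idea that Jester and heckle share can be sketched in a few lines of Ruby (a toy illustration of the technique, not how either tool is implemented): mutate the code, re-run the test, and flag mutants that no test catches.

```ruby
# A toy mutation tester in the spirit of Jester/heckle: change the code,
# then check whether the test suite notices.

# Code under test, kept as a string so we can mutate it.
SOURCE = "def max(x, y); if x > y then x else y end; end"

# A deliberately weak test: it never drives the x > y branch both ways.
def run_test
  max(3, 5) == 5
end

def survives?(mutated_source)
  Object.class_eval(mutated_source)   # redefine max with the mutant
  run_test                            # true means the mutant survived
end

Object.class_eval(SOURCE)
puts "original passes: #{run_test}"   # expect true

# Mutation: replace the condition with a constant, like Jester's if(false).
mutant = SOURCE.sub("x > y", "false")
puts "mutant survives: #{survives?(mutant)}"   # expect true: a test gap
```

A surviving mutant points at code whose behavior no test pins down, which is a sharper signal than a raw line-coverage percentage.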

Of course there’s controversy about the value of code coverage tools, but really it’s an issue of misusing them. They are useful, and they give people something to aim for. A more interesting idea is a “functionality” coverage tool — which would have to be more manually built. An interesting article mentions rspec but that’s not really what I meant, though still an interesting idea.

A requirements coverage matrix shouldn’t be a crutch any more than a code coverage report, but the combination could be powerful.
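As a footnote, what line coverage actually measures can be demonstrated with Ruby’s built-in Coverage module (a sketch in Ruby rather than Java, since the idea is the same across the tools above; requires Ruby 1.9+):

```ruby
require 'coverage'
require 'tempfile'

# Write a small file to measure, since Coverage tracks loaded files.
code = <<~RUBY
  def classify(n)
    if n > 0
      :positive
    else
      :other
    end
  end
  classify(1)
RUBY

file = Tempfile.new(['sample', '.rb'])
file.write(code)
file.close

Coverage.start
load file.path
result = Coverage.result[file.path]

# result is an array of per-line execution counts (nil = not executable).
# The :other branch shows 0 hits: classify(1) never exercises it.
result.each_with_index do |hits, i|
  puts format('%2d: %-14s %s', i + 1, code.lines[i].chomp.strip, hits.inspect)
end
```

The report makes the gap visible (the else branch never runs), which is all a coverage tool can promise: it shows what was executed, not what was verified.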

My first Flex app

It’s just adding a slider control to rotate an image, based on the first example I saw, but the exercise of looking through the API docs and figuring out how to create an event handler helped me understand what was going on and gave me confidence about being able to learn on the fly.

See it here on the Fluffy QA Site: Flash Source

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
    <mx:Script>
        <![CDATA[
            import mx.events.SliderEvent;

            private function sliderHandler(event:SliderEvent):void {
                degrees.text = event.value.toString();
                shell.rotation = event.value;
            }
        ]]>
    </mx:Script>

    <mx:Image id="shell" source="@Embed('C:/users/Aaron/Pictures/shell2.png')" height="100" top="225" left="208">
        <mx:filters>
            <mx:DropShadowFilter />
        </mx:filters>
    </mx:Image>

    <mx:HSlider x="25" y="42" id="slider" minimum="-360" maximum="360" snapInterval="10" tickInterval="360" change="sliderHandler(event)"/>
    <mx:Label x="208" y="42" text="degrees:" id="lbl_degrees"/>
    <mx:TextInput x="271" y="40" width="45" id="degrees"/>
    <mx:Text x="25" y="10" text="Rotator" fontSize="16"/>
</mx:Application>
Now I’ll go read the manual.


Tinderbox is the Mozilla project’s build tool. It’s designed to run builds and tests on multiple environments.

Here are some starter links:

Continuous Integration without a Java application server

I don’t want to have a Java app server running on a QA site. It takes too much memory and isn’t stable enough, especially if it’s running on a VM with limited memory and CPU. So I want something that can run as a CGI, FastCGI, or Apache module (mod_perl, mod_php, mod_python).

So the field is narrowed down quite a bit:


Tinderbox is Mozilla’s build system. What I know about it is that your central server outsources the build to a “tinderbox”, which I assume can also run tests. It’s written in Perl. I don’t know how up to date it is, but the idea of a collection of VMs running tests for a project is appealing, especially if the build/test machines aren’t the same as the project server. If Tinderbox has this built in, it goes to the top of the list. Anything else, I’d have to cobble something together.


Buildbot is written in Python. I think mod_python is stable enough, but if not, CherryPy could probably run it. This opens up remoting potential as well, but I’m going to guess it doesn’t have the distributed nature I’m hoping for from Tinderbox. Since I haven’t heard much about Buildbot, I’m skeptical it will have all the features I might want.


I saw one review that said Cerberus doesn’t have a web interface. That might not be a deal breaker, but it makes it less likely to get a quick trial. Maybe Cerberus + CI::Reporter and Ant’s JUnit HTML reports would work. I’m also uncertain whether Cerberus has a daemon. Somewhere it said it runs via cron, but that might be out of date. I think Luntbuild might do something similar. That might be acceptable: have cron run every 10 minutes, check for checkins, and not block. Or instead of cron-triggered, have it triggered on checkin via an SVN hook. But that makes reporting a bit more difficult, since something would apparently have to poll for the Cerberus reports as well.
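The cron-triggered approach boils down to a poll-and-build loop. A minimal Ruby sketch (hypothetical names: `poll_and_build` and the revision argument are stand-ins; a real script would ask `svn info` for the latest revision and shell out to the build):

```ruby
require 'tmpdir'

# Sketch of a cron-style trigger: remember the last revision built,
# and kick off a build only when a newer one appears.
def poll_and_build(state_file, latest_revision)
  last = File.exist?(state_file) ? File.read(state_file).to_i : 0
  return :no_change if latest_revision <= last

  File.write(state_file, latest_revision.to_s)
  :built   # a real script would launch the build/test run here
end

Dir.mktmpdir do |dir|
  state = File.join(dir, 'last_built_revision')
  puts poll_and_build(state, 5)   # first sighting of r5: build
  puts poll_and_build(state, 5)   # nothing new: skip
  puts poll_and_build(state, 6)   # r6 arrived: build again
end
```

Run every 10 minutes from cron, this is non-blocking and idempotent; an SVN post-commit hook could call the same function to get checkin-triggered builds instead.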

I’d love to hear any feedback: corrections, implementations, or other tools.

Ruby CI options

Here’s a link to Ruby CI apps:

More links:

Also look at CI::Reporter, which converts RSpec and Test::Unit output from Rake into a format Ant’s JUnitReport can handle.  That opens up potential use of the Java CI servers and CruiseControl.NET
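For example, a Rakefile along these lines (a sketch based on CI::Reporter’s documented rake tasks; the test file pattern is an assumption about your project layout) produces JUnit-style XML a Java CI server can consume:

```ruby
# Rakefile (sketch): produce JUnit-format XML from Test::Unit runs.
require 'rubygems'
require 'ci/reporter/rake/test_unit'   # adds the ci:setup:testunit task

require 'rake/testtask'
Rake::TestTask.new(:test) do |t|
  t.pattern = 'test/**/*_test.rb'      # assumed layout; adjust to taste
end

# Run as: rake ci:setup:testunit test
# XML reports land in test/reports/, ready for Ant's <junitreport>
# or any CI server that understands JUnit output.
```

(For RSpec the equivalent setup task is ci:setup:rspec, with reports in spec/reports/.)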