Weekly Wednesday Webinar on Selenium & Sauce Labs

I’ve been working at Sauce Labs for a while now, helping enterprise users build test automation frameworks and implement continuous integration using Selenium & Sauce Labs.

In order to reach a larger audience — and to learn more about people’s challenges developing test automation — I’m going to be hosting a weekly webinar on using Selenium with Sauce Labs for test automation.

So, starting this week, each Wednesday during lunch (12:30pm Mountain Time) I’ll host a webinar / office hours.  I’ll begin with a brief presentation introducing the topic, followed by a demo (live coding — what could go wrong?), and then open it up for questions & comments.

The first webinar will be tomorrow at 12:30pm MST.  The topic is DesiredCapabilities.

I’ll talk about what desired capabilities are, how to use desired capabilities with Sauce Labs, and show how you can use the Sauce Labs platform configurator to generate desired capabilities. I’ll also talk about Sauce Labs-specific capabilities used to report on tests and builds.
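
To give a taste, here’s a minimal sketch in C# of the kind of thing I’ll demo (the browser values are arbitrary, and the username and access key are placeholders for your own Sauce credentials):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

// standard Selenium capabilities: which browser/OS combination to request
DesiredCapabilities caps = DesiredCapabilities.Firefox();
caps.SetCapability("version", "26");
caps.SetCapability("platform", "Windows 7");

// Sauce Labs-specific capabilities used for reporting on tests and builds
caps.SetCapability("name", "my first sauce test");
caps.SetCapability("build", "build-42");

// placeholder credentials -- substitute your own Sauce account
caps.SetCapability("username", "YOUR_SAUCE_USERNAME");
caps.SetCapability("accessKey", "YOUR_SAUCE_ACCESS_KEY");

IWebDriver driver = new RemoteWebDriver(
    new Uri("http://ondemand.saucelabs.com:80/wd/hub"), caps);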

Register on EventBrite here: Selenium & Sauce Labs Webinar: Desired Capabilities

Join the WebEx directly: Selenium & Sauce Labs Webinar: Desired Capabilities

Contact me if you’d like a calendar invite or more info.

Continuous Testing

I’ve done a lot of setup and administration of continuous integration servers — CruiseControl (including the CruiseControl.rb and CruiseControl.NET variants), Luntbuild, Hudson, Jenkins, Bamboo, TFS, Go. I have my favorites (and not-so-favorites).

I’ve seen them used well and abused for continuous delivery and deployment as well.

Ironically, the setups that seem to work best have lots of shell scripts written in text boxes. These are a nightmare to maintain, and impossible to recover if the server is lost.

They often look like this:

  1. Set a bunch of environment variables in the CI tool
  2. Have a bunch of environment variables set (or overridden) within the job script
  3. Take a few more environment variables as user selected configuration parameters for the job
  4. Do some environment cleanup
  5. And then run the “build script” – a shell script that calls a python script that does a bunch more of the same stuff – and then eventually calls maven or rake:

ENV_VAR1=$DEFAULT ENV_VAR2=$FRAMEWORK_DEFAULT /usr/local/bin/custom_shell build_script.sh -ex $OPTION1 $OPTION2

build_script.sh:

ENV_VAR3="DEFAULT"
ENV_VAR4=$OPTION1
ENV_VAR5=${OPTION2:-"DEFAULT"}
ENV_VAR5=`some_script.sh $ENV_VAR1 $ENV_VAR2`
export ENV_VAR3; export ENV_VAR4; export ENV_VAR5
/usr/bin/python cleanup_script.py $OPTION1 "hard_coded_value"
/usr/local/bin/custom_python build_script.py

One of the first things I do when starting on a new project is pull these scripts (and their associated environment variable settings) into version control.

And then I delete the jobs and never think about them again. I wish.

But I put them under version control because I don’t want to lose them.

And then I start refactoring.  And then I start trying to rebuild the functionality from scratch.  And then I congratulate the build engineer (who’s probably a developer trying to get work done on a completely different project) on his job security.


Acceptance Criteria Presentation

A few weeks ago I gave a presentation about acceptance criteria and agile testing to a team of developers I’m working with.

Some of the developers were familiar with agile processes & test driven development, but some were not. I introduced the idea of behavior driven development, with both rspec “it should” and gherkin “given/when/then” style syntax. I stressed that the exact syntax is not important, but consistency helps with understanding and can also help avoid “tester’s block”.

It’s a Java shop, but I didn’t get into the details of JBehave, Cucumber or any other frameworks.  I pointed out that you can write tests this way without implementing the automation steps and still get value — with the option of completing the automation later.  This is particularly valuable in a system that is difficult to test, or has external dependencies that aren’t easily mocked.

Here are the slides:

Acceptance Criteria Presentation [PDF] or [PPTX]

And a rough approximation below:


Acceptance Criteria

 

how to make it easier to know if what you’re doing is what they want you to do


What are Acceptance Criteria?



By any other name…

● Requirements
● Use Cases
● Features
● Specifications
● User Stories
● Acceptance Tests
● Expected Results
● Tasks, Issues, Bugs, Defects, Tickets…


What are Acceptance Criteria?



…would smell as sweet

● A way for business to say what they want
● A way for customers to describe what they need
● A way for developers to know when a feature is done
● A way for testers to know if something is working right


The “Agile” definition


User Stories

As a … [who]
I want to … [action]
So that I can … [result]


Acceptance Criteria

Given … [some precondition]
When … [action is performed]
Then … [expected outcome]

(Gherkin style)


Acceptance Criteria

Describe [the system] … [some context]

It (the system) should … [expected result]

(“should” syntax)


Shh…don’t tell the business guys

it’s programming


but can be compiled by humans…and computers!


Inputs and Outputs

if I enter X + Y
then the result should be Z

f(x,y) = z

 


Not a proof

or a function
or a test
or a requirement
or …

It’s just a way to help everyone understand


It should

  1. Describe “it”
    (feature/story/task/requirement/issue/defect/whatever)
  2. Give steps to perform
  3. List expected results

Show your work


● Provide examples
● List preconditions
● Specify exceptions


A conversation, not a specification

Do

● use plain English
● be precise
● be specific

Don’t…

● worry about covering everything
● include implementation details
● use jargon
● assume system knowledge


Thanks!

If you’re interested in learning how to turn your manual testing process into an agile automated test suite, I can help.

contact me

Aaron Evans
aarone@one-shore.com

425-242-4304


Thoughts on NUnit and MSTest

I recently had a discussion with some other developers about NUnit and MSTest. My personal preference is based on familiarity — originally from JUnit and TestNG, but also with NUnit. NUnit was around long before MSTest, and MSTest was not available with Visual Studio Express. I personally haven’t used MSTest, so I scoured the internet and picked some colleagues’ brains to come up with this post.

Here was my original question:

Thoughts on NUnit vs MSTest? I like NUnit because it’s more familiar coming from JUnit/TestNG and doesn’t depend on the Visual Studio runtime, but it has its drawbacks. Any other opinions?

Here’s one exchange:

I like NUnit also even though my experience is with MSTest… VS2012 now supports NUnit also! We support both in the CD infrastructure. Most anything you can do in MSTest can be done with NUnit with a little understanding.

What is it about NUnit that you like even though you’re experienced with MSTest?

I have found NUnit to be supported and maintained as a first class solution for testing across most tools/test runners. Sonar and Go support NUnit natively. MSTest results are still not supported in Go and in Sonar it’s an add-on plugin.

MSTest is only good if you are 100% in MS technologies for build and deployment using TFS build agents. In our mixed technology environment NUnit bridges them all smoother than MSTest.

And another:

While we support both in Go, MSTest requires Visual Studio to be installed on the agent (ridiculous, imo).

NUnit usually runs faster (due to reduced I/O, since it doesn’t produce a separate folder for each test run with shadow-copied assemblies).

The testing community in general prefers NUnit, so it’s easier to find help/examples.

I could go on, but here are a couple of great articles:

http://stackoverflow.com/questions/2367734/nunit-vs-visual-studio-2010s-mstest

http://nexussharp.wordpress.com/2012/04/16/showdown-mstest-vs-nunit/

Here are some additional comments of mine, based on discussions around the internet:

I agree that it’s ridiculous to require Visual Studio for test execution, but I understand you can get around it with just the Windows SDK and some environment tweaks.

I wasn’t previously aware of all the file pollution MSTest causes, both with the references and VSMDI files and all the temp files it generates. With the Go agents we have set up, neither of those is too big an issue anymore.

The syntax was my main preference, but I found you can use NUnit Assertions with MSTest — including Assert.That() and Assert.Throws() by doing this:

using Microsoft.VisualStudio.TestTools.UnitTesting; 
using Assert = NUnit.Framework.Assert;
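
Here’s a fuller sketch of what that looks like (the calculator test is made up for illustration; note I alias Is as well, so NUnit’s constraint syntax resolves without ambiguity):

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Assert = NUnit.Framework.Assert;
using Is = NUnit.Framework.Is;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void AdditionAndDivisionBehaveAsExpected()
    {
        // NUnit's constraint-style assertion, run by the MSTest runner
        Assert.That(2 + 2, Is.EqualTo(4));

        // NUnit's Assert.Throws, which classic MSTest lacks
        Assert.Throws<DivideByZeroException>(() => Divide(1, 0));
    }

    private int Divide(int a, int b) { return a / b; }
}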

But you can also use the independent Fluent Assertions which I think is even nicer.  I still prefer the NUnit attribute names though.

Here is a somewhat dated comparison of the NUnit and MSTest attribute syntax.

XUnit / Gallio has some nice data-driven features (http://blog.benhall.me.uk/2008/01/introduction-to-xunitnet-extensions.html) but some weird syntax, such as [Fact] instead of [Test] (http://xunit.codeplex.com/wikipage?title=Comparisons). I also think data providers should be implemented separately from the tests, like NUnit’s [TestCase] and [TestCaseSource(methodName)] attributes (http://nunit.org/index.php?p=testCaseSource&r=2.5); see the sketch below.
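
For illustration, here’s a quick sketch of those NUnit attributes (the example data is mine):

using NUnit.Framework;

[TestFixture]
public class DataDrivenExamples
{
    // [TestCase] inlines the data with the test...
    [TestCase(2, 3, 5)]
    [TestCase(0, 0, 0)]
    public void AddReturnsSum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }

    // ...while [TestCaseSource] keeps the data provider separate from the test
    static object[] DivideCases =
    {
        new object[] { 12, 3, 4 },
        new object[] { 12, 4, 3 },
    };

    [Test, TestCaseSource("DivideCases")]
    public void DivideReturnsQuotient(int n, int d, int expected)
    {
        Assert.AreEqual(expected, n / d);
    }
}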

One last thing I like about NUnit is that it’s standalone. You could choose to include a specific version of the NUnit libraries with each project – and even fork it if you want to add features, because it’s open source – though that’s not really practical. But the open source nature – and the fact that it’s older – means that you can find lots of information on the intertubes.

I wasn’t too impressed with the native NUnit runner inside Visual Studio 2012, but Resharper makes it nice. Some people on my team have complained about the extra weight Resharper adds, though I haven’t seen a problem (with 8GB of RAM). One complaint I can understand is the shortcut collisions R# introduces, especially if your fingers were trained on Visual Studio, but for someone like me coming from Java IDEs the Resharper shortcuts are wonderful.

R# is a beautiful, beautiful thing – the extra weight is well worth it, what more could you ask for than IntelliJ in VS?

I can’t say I have much of a syntactical preference either way, but I would just say ‘Amen’ to earlier thoughts.

 

Running NUnit tests programmatically

I’m working on a test framework that needs to be run by less-technical testers. The tests are data driven from a spreadsheet (the Google Docs spreadsheet API + gdata).

Tests will be run locally (for now at least), since there isn’t a test lab available for remote execution, and no CI server. I didn’t want to require users to install NUnit to execute tests.

At first I started by writing a Main() method and rolling my own assertions. But I decided that the parameterized test features of NUnit were worth the effort of a little research. NUnit can, in fact, be run programmatically, though the execution appears less flexible than with other frameworks.

I created a TestRunner class with a Main() function:


using System;
using System.IO;
using NUnit.Core;
using NLog;

namespace oneshore.qa.testrunner
{
    class TestRunner
    {
        // static so it can be used from Main()
        static Logger log = NLog.LogManager.GetCurrentClassLogger();

        public static void Main(String[] args)
        {
            // in real use, take this from the command line args
            String pathToTestLibrary = "C:\\dev\\oneshore.Tests.DLL";

            DateTime startTime = System.DateTime.Now;
            log.Info("starting test execution...");

            TestRunner runner = new TestRunner();
            runner.run(pathToTestLibrary);

            log.Info("...test execution finished");
            DateTime finishTime = System.DateTime.Now;
            TimeSpan elapsedTime = finishTime.Subtract(startTime);
            log.Info("total elapsed time: " + elapsedTime);
        }

        public void run(String pathToTestLibrary)
        {
            // initialize NUnit's core services before loading any tests
            CoreExtensions.Host.InitializeService();

            // bundle every test found in the assembly into one suite
            TestPackage testPackage = new TestPackage(pathToTestLibrary);
            testPackage.BasePath = Path.GetDirectoryName(pathToTestLibrary);
            TestSuiteBuilder builder = new TestSuiteBuilder();
            TestSuite suite = builder.Build(testPackage);

            // run the whole suite with no event listener and no filter
            TestResult result = suite.Run(new NullListener(), TestFilter.Empty);

            log.Debug("has results? " + result.HasResults);
            log.Debug("results count: " + result.Results.Count);
            log.Debug("success? " + result.IsSuccess);
        }
    }
}

Link to gist of this code.

The advantage to running tests this way is that NUnit does not need to be installed (though DLLs for NUnit — nunit.core.dll & nunit.core.interfaces.dll — and any other dependencies like NLog need to be packaged with the TestRunner.) You can still write and execute your tests from NUnit while under development.

One disadvantage is that you don’t get full test results, because the TestSuiteBuilder bundles every test it finds into one suite. I’d like to find a way to improve that. You also can’t run more than one test assembly at the same time — you can create an NUnit project XML file for that (a sample is below) — but at that point you might as well bundle the NUnit test framework.
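
For reference, an NUnit project file is just a small XML file listing the assemblies to load together; something along these lines (the assembly names here are made up):

<NUnitProject>
  <Settings activeconfig="Default" />
  <Config name="Default" binpathtype="Auto">
    <assembly path="oneshore.Tests.dll" />
    <assembly path="oneshore.MoreTests.dll" />
  </Config>
</NUnitProject>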

For now, my base test class (which each of my NUnit tests inherits from) reports results by catching and counting assertion failures and writing them to a log file. It can then use the Quality Center integration tool I described in an earlier post. I’m planning on wiring it all together eventually, so anyone can run tests automatically by clicking on an icon, select the test library using a file picker dialog (see upcoming post), and have the test results entered in QC.
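
Here’s a rough sketch of the idea (simplified from my actual class; the Check() wrapper and the names are just for illustration):

using System;
using NLog;
using NUnit.Framework;

public abstract class LoggingTestBase
{
    protected static Logger log = LogManager.GetCurrentClassLogger();
    protected int failureCount = 0;

    // wrap an assertion so a failure is counted and logged
    // rather than aborting the rest of the test
    protected void Check(Action assertion, string description)
    {
        try
        {
            assertion();
            log.Info("PASS: " + description);
        }
        catch (AssertionException ex)
        {
            failureCount++;
            log.Error("FAIL: " + description + " - " + ex.Message);
        }
    }
}

A test then calls Check(() => Assert.AreEqual(expected, actual), "some description") instead of asserting directly, and the failure count is available for reporting at teardown.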

This will allow distributed parameterized testing that can be done by anyone. I may try to set up a web UI like FitNesse for data-driven tests as well.

Updating test results in QC using the QC OTA API explained

Yesterday I cleaned up and posted my example QCIntegration utility on GitHub.

While it works as a standalone tool, some people might not want to wade through the code to understand or modify it. So today I’m going to try to explain how the OTA API works by recreating the steps, with explanation, in a simple script.

I’ll start with an example using C# and then give an equivalent Python example. I’ll use the same scenario, updating test case results in QC, but if requested, I can also show how to get test steps from a test plan, or read & update defects in QC using the OTA library.

First, create a new project in Visual Studio (or SharpDevelop). You’ll need to add the OTAClient.dll as a reference. It is a COM library and contains the single interface TDConnection.

When searching for the library name it is called the “OTA COM Type Library”. The package is “TDApiOle80.” Since it is a COM library, it needs to use an interop for C#, but this is handled automatically by the IDE.

using TDAPIOLELib;
TDConnection tdConn = new TDConnection();

Now, let’s create a connection to your Quality Center server. You’ll need to know the URL of your QC Server and have valid login credentials with access to an existing Domain and Project.

Assuming you have Quality Center installed on your local machine (not a typical setup), you might have the following configuration:

string qcUrl = "http://localhost:8080/qcbin";
string qcDomain = "oneshore";
string qcProject = "qa-site";
string qcLoginName = "aaron";
string qcPassword = "secret";

Note: I do not use this same password for my bank account

There are several ways to log in, but I’ll use the simplest here:

tdConn.InitConnectionEx(qcUrl);
tdConn.ConnectProjectEx(qcDomain, qcProject, qcLoginName, qcPassword);

Now you need to find the test sets that need to be updated. I typically use a folder structure that goes something like:

Project – Iteration – Component – Feature

It’s a bit convoluted, but here’s the code to get a test set:

string testFolder = @"Root\QASite\Sprint5\Dashboard\Recent Updates";
string testSetName = "Recent Updates - New Defects Logged";

TestSetFactory tsFactory = (TestSetFactory)tdConn.TestSetFactory;
TestSetTreeManager tsTreeMgr = (TestSetTreeManager)tdConn.TestSetTreeManager;
TestSetFolder tsFolder = (TestSetFolder)tsTreeMgr.get_NodeByPath(testFolder);
List tsList = tsFolder.FindTestSets(testSetName, false, null);

The parameters for FindTestSets are a pattern to match, whether to match case, and a filter. Since I’m looking for a specific test set, I don’t bother with the other two parameters.

You could easily get a list of all test sets involving the recent updates feature that haven’t been executed by substituting this line:

List tsList = tsFolder.FindTestSets("recent updates", true, "status=No Run");

Now we want to loop through the test set and build a collection of tests to update. Note that we might have more than one test set in the folder and one or more subfolders as well:

foreach (TestSet testSet in tsList)
{
    // each test set may be in a subfolder of the folder we searched
    TestSetFolder testSetFolder = (TestSetFolder)testSet.TestSetFolder;
    TSTestFactory tsTestFactory = (TSTestFactory)testSet.TSTestFactory;
    List tsTestList = tsTestFactory.NewList("");

And finally, update each test case status:

    foreach (TSTest tsTest in tsTestList)
    {
        Run lastRun = (Run)tsTest.LastRun;

        // only update tests that have never been run, so we don't
        // clobber results that may have been recorded by someone else
        if (lastRun == null)
        {
            RunFactory runFactory = (RunFactory)tsTest.RunFactory;
            String date = DateTime.Now.ToString("yyyyMMddhhmmss");
            Run run = (Run)runFactory.AddItem("Run" + date);
            run.Status = "Pass";
            run.Post();
        }
    } // end loop of test cases

} // end outer loop of test sets

Of course you might want to add your actual test results. If you have a dictionary of test names and statuses, you can simply do this:

Dictionary<string, string> testResults = new Dictionary<string, string>();
testResults.Add("New defects in Recent Updates are red", "Pass");
testResults.Add("Resolved defects in Recent Updates are green", "Pass");
testResults.Add("Reopened defects in Recent Updates are bold", "Fail");

if (testResults.ContainsKey(tsTest.TestName))
{
    string status = testResults[tsTest.TestName];
    recordTestResult(tsTest, status);
}
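
In case it isn’t obvious, recordTestResult() just wraps the RunFactory logic from above; a minimal version might look like this (my sketch, not necessarily what’s in QCIntegration):

private static void recordTestResult(TSTest tsTest, string status)
{
    RunFactory runFactory = (RunFactory)tsTest.RunFactory;
    String date = DateTime.Now.ToString("yyyyMMddhhmmss");
    Run run = (Run)runFactory.AddItem("Run" + date);
    run.Status = status;   // "Pass" or "Fail"
    run.Post();
}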

That’s all for now. I’ll translate the example into Python tomorrow, but you’ll see it’s really quite straightforward.

Links: Unit Testing and Continuous Integration with Flex and AsUnit

Just a bunch of links to tutorials on using AsUnit and continuous integration with Flex Projects:

A post on AsUnit by one of its creators, Luke Bayes:
http://asserttrue.com/articles/2006/03/10/AsUnit25

An example of a simple TestRunner mxml (AS2):
http://asserttrue.com/articles/2006/10/05/flex2-mxml-project-support-in-asunit

Luke’s post on continuous integration:
http://lukebayes.blogspot.com/2005/11/continuous-integration-with-asunit.html

A good tutorial about using AsUnit (but with only a Flash testRunner):
http://www.insideria.com/2008/09/unit-testing-with-asunit.html

Another good tutorial about using AsUnit:
http://www.insideria.com/2008/05/anatomy-of-an-enterprise-flex-10.html

Discussion of one team’s unit test framework requirements:
http://www.eyefodder.com/blog/2006/06/unit_test_frameworks_for_as3_a.shtml

Weaknesses and strengths of FlexUnit and AsUnit:
http://www.eyefodder.com/blog/2006/07/flexunit_asunit_deathmatch_res.shtml

Story of their use of Continuous Integration:
http://www.eyefodder.com/blog/continuous_integration/
http://www.eyefodder.com/blog/2006/05/continuous_integration_with_fl.shtml
http://www.eyefodder.com/blog/2006/05/continuous_integration_with_fl_1.shtml
http://www.eyefodder.com/blog/2006/05/continuous_integration_with_fl_2.shtml
http://www.eyefodder.com/blog/2006/05/continuous_integration_with_fl_3.shtml
http://www.eyefodder.com/blog/2006/05/continuous_integration_with_fl_4.shtml
http://www.eyefodder.com/blog/2006/05/continuous_integration_with_fl_5.shtml
http://www.eyefodder.com/blog/2006/05/continuous_integration_with_fl_6.shtml

A flash-oriented tutorial, but with good AsUnit explanations:
http://marstonstudio.com/2007/07/28/asunit-testing-with-flash-cs3-and-actionscript-3/