Weekly Wednesday Webinar on Selenium & Sauce Labs

I’ve been working at Sauce Labs for a while now, helping enterprise users build test automation frameworks and implement continuous integration using Selenium & Sauce Labs.

In order to reach a larger audience — and to learn more about people’s challenges developing test automation — I’m going to be hosting a weekly webinar on using Selenium with Sauce Labs for test automation.

So, starting this week, each Wednesday during lunch (12:30pm Mountain Time) I’ll host a webinar / office hours.  I’ll begin with a brief presentation introducing the topic, followed by a demo (live coding — what could go wrong?), and then open it up for questions & comments.

The first webinar will be tomorrow at 12:30pm MST.  The topic is DesiredCapabilities.

I’ll talk about what desired capabilities are, how to use desired capabilities with Sauce Labs, and show how you can use the Sauce Labs platform configurator to generate desired capabilities.  I’ll also talk about Sauce Labs-specific capabilities used to report on tests and builds.
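For a preview, here’s a minimal C# sketch of the idea (the browser, platform, test name, and build number are placeholder values, and it assumes your Sauce Labs credentials are in the SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

class SauceCapabilitiesExample
{
    static void Main()
    {
        // standard Selenium desired capabilities: browser, version, platform
        DesiredCapabilities caps = DesiredCapabilities.Firefox();
        caps.SetCapability("platform", "Windows 7");
        caps.SetCapability("version", "26");

        // Sauce Labs credentials, read from environment variables
        caps.SetCapability("username", Environment.GetEnvironmentVariable("SAUCE_USERNAME"));
        caps.SetCapability("accessKey", Environment.GetEnvironmentVariable("SAUCE_ACCESS_KEY"));

        // Sauce Labs-specific capabilities used for reporting on tests and builds
        caps.SetCapability("name", "login test");      // job name shown in the Sauce dashboard
        caps.SetCapability("build", "myapp-build-42"); // groups jobs under a build

        // point RemoteWebDriver at the Sauce Labs hub instead of a local browser
        IWebDriver driver = new RemoteWebDriver(
            new Uri("http://ondemand.saucelabs.com:80/wd/hub"), caps);

        driver.Navigate().GoToUrl("http://www.example.com");
        Console.WriteLine(driver.Title);
        driver.Quit();
    }
}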

Register on EventBrite here: Selenium & Sauce Labs Webinar: Desired Capabilities

Join the WebEx directly: Selenium & Sauce Labs Webinar: Desired Capabilities

Contact me if you’d like a calendar invite or more info.

Acceptance Criteria Presentation

A few weeks ago I gave a presentation about acceptance criteria and agile testing to a team of developers I’m working with.

Some of the developers were familiar with agile processes & test driven development, but some were not. I introduced the idea of behavior driven development, with both RSpec “it should” and Gherkin “given/when/then” style syntax. I stressed that the exact syntax is not important, but consistency helps with understanding and can also help avoid “tester’s block”.

It’s a Java shop, but I didn’t get into the details of JBehave, Cucumber or any other frameworks.  I pointed out that you can write tests this way without implementing the automation steps and still get value — with the option of completing the automation later.  This is particularly valuable in a system that is difficult to test, or has external dependencies that aren’t easily mocked.
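For example, here’s the kind of acceptance criteria I mean, written in Gherkin (a made-up password reset story, just to illustrate the style):

Feature: Password reset

  Scenario: Reset with a registered email address
    Given a registered user with the email "pat@example.com"
    When the user requests a password reset
    Then a reset link is emailed to "pat@example.com"

Even with no step definitions behind it, this tells the developer when the feature is done and the tester what to check.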

Here are the slides:

Acceptance Criteria Presentation [PDF] or [PPTX]

And a rough approximation below:


Acceptance Criteria

 

how to make it easier to know if what you’re doing is what they want you to do


What are Acceptance Criteria?



By any other name…

● Requirements
● Use Cases
● Features
● Specifications
● User Stories
● Acceptance Tests
● Expected Results
● Tasks, Issues, Bugs, Defects, Tickets…


What are Acceptance Criteria?



…would smell as sweet

● A way for business to say what they want
● A way for customers to describe what they need
● A way for developers to know when a feature is done
● A way for testers to know if something is working right


The “Agile” definition


User Stories

As a … [who]
I want to … [action]
So that I can … [result]


Acceptance Criteria

Given … [some precondition]
When … [action is performed]
Then … [expected outcome]

(Gherkin style)


Acceptance Criteria

Describe [the system] … [some context]

It (the system) should … [expected result]

(“should” syntax)


Shh…don’t tell the business guys

it’s programming


but can be compiled by humans…and computers!


Inputs and Outputs

if I enter X + Y
then the result should be Z

f(x,y) = z

 


 Not a proof

or a function
or a test
or a requirement
or …

It’s just a way to help everyone understand


It should

  1. Describe “it”
    (feature/story/task/requirement/issue/defect/whatever)
  2. Give steps to perform
  3. List expected results

Show your work


● Provide examples
● List preconditions
● Specify exceptions


A conversation not a specification

Do

● use plain English
● be precise
● be specific

Don’t…

● worry about covering everything
● include implementation details
● use jargon
● assume system knowledge


Thanks!

If you’re interested in learning how to turn your manual testing process into an agile automated test suite,  I can help.

contact me

Aaron Evans
aarone@one-shore.com

425-242-4304



 

 

Thoughts on NUnit and MSTest

I recently had a discussion with some other developers about NUnit and MSTest. My personal preference is based on familiarity — originally from JUnit and TestNG, but also with NUnit. NUnit was around long before MSTest, and MSTest was not available with Visual Studio Express. I personally haven’t used MSTest, so I scoured the internet and picked some colleagues’ brains to come up with this post.

Here was my original question:

Thoughts on NUnit vs MSTest? I like NUnit because it’s more familiar coming from JUnit/TestNG and doesn’t depend on the Visual Studio runtime, but it has its drawbacks. Any other opinions?

Here’s one exchange:

I like NUnit also even though my experience is with MSTest… VS2012 now supports NUnit also! We support both in the CD infrastructure. Most anything you can do in MSTest can be done with NUnit with a little understanding.

What is it about NUnit that you like even though you’re experienced with MSTest?

I have found NUnit to be supported and maintained as a first-class solution for testing across most tools/test runners. Sonar and Go support NUnit natively. MSTest results are still not supported in Go, and in Sonar it’s an add-on plugin.

MSTest is only good if you are 100% in MS technologies for build and deployment using TFS build agents. In our mixed-technology environment, NUnit bridges them all more smoothly than MSTest.

And another:

While we support both in Go, MSTest requires Visual Studio to be installed on the agent (ridiculous, imo).

NUnit usually runs faster (due to reduced I/O, since it doesn’t produce a separate folder for each test run with shadow-copied assemblies).

The testing community in general prefers NUnit, so it’s easier to find help/examples.

I could go on, but here are a couple of great articles:

http://stackoverflow.com/questions/2367734/nunit-vs-visual-studio-2010s-mstest

http://nexussharp.wordpress.com/2012/04/16/showdown-mstest-vs-nunit/

Here are some additional comments of my own, based on what I read online:

I agree that it’s ridiculous to require Visual Studio for test execution, but I understand you can get around it with just the Windows SDK and some environment tweaks.

I wasn’t previously aware of all the file pollution MSTest causes, both with the references and VSMDI files and all the temp files it generates.  With the Go agents we have set up, neither of those is too big of an issue anymore.

The syntax was my main reason for preferring NUnit, but I found you can use NUnit assertions with MSTest — including Assert.That() and Assert.Throws() — by doing this:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Assert = NUnit.Framework.Assert; // alias so Assert resolves to NUnit instead of MSTest
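For example, here’s a minimal sketch of an MSTest test class using NUnit’s assertions through that alias (the calculator-style tests are just for illustration):

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Assert = NUnit.Framework.Assert;
using Is = NUnit.Framework.Is;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_ReturnsSum()
    {
        // NUnit constraint-style assertion inside an MSTest test
        Assert.That(2 + 3, Is.EqualTo(5));
    }

    [TestMethod]
    public void Divide_ByZero_Throws()
    {
        // NUnit's Assert.Throws (MSTest of this era relies on [ExpectedException] instead)
        Assert.Throws<DivideByZeroException>(() => Divide(1, 0));
    }

    private int Divide(int a, int b)
    {
        return a / b;
    }
}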

But you can also use the independent Fluent Assertions library, which I think is even nicer.  I still prefer the NUnit attribute names, though.

Here is a somewhat dated comparison of the NUnit and MSTest attribute syntax.

xUnit / Gallio has some nice data-driven features (http://blog.benhall.me.uk/2008/01/introduction-to-xunitnet-extensions.html), but also some weird syntax, such as [Fact] instead of [Test] (http://xunit.codeplex.com/wikipage?title=Comparisons). I think data providers should be implemented separately from the tests, like NUnit’s [TestCase] / [TestCaseSource(methodName)] (http://nunit.org/index.php?p=testCaseSource&r=2.5).
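Here’s a minimal NUnit sketch of both styles, keeping the data provider separate from the test (the Add examples are just for illustration):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class AdditionTests
{
    // inline parameters
    [TestCase(1, 2, 3)]
    [TestCase(-1, 1, 0)]
    public void Add_ReturnsSum(int x, int y, int expected)
    {
        Assert.That(x + y, Is.EqualTo(expected));
    }

    // the data provider is a separate method, referenced by name
    [TestCaseSource("AdditionCases")]
    public void Add_FromSource(int x, int y, int expected)
    {
        Assert.That(x + y, Is.EqualTo(expected));
    }

    private static IEnumerable<object[]> AdditionCases()
    {
        yield return new object[] { 2, 2, 4 };
        yield return new object[] { 10, -5, 5 };
    }
}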

One last thing I like about NUnit is that it’s standalone.  You could choose to include a specific version of the NUnit libraries with each project – and even fork if you want to add features because it’s open source, though that’s not really practical.  But the open source nature – and that it’s older – means that you can find lots of information on the intertubes.

I wasn’t too impressed with the native NUnit runner inside Visual Studio 2012, but ReSharper makes it nice.  Some people on my team have complained about the extra weight ReSharper adds, though I haven’t seen a problem (with 8GB of RAM).  One complaint I can understand is the shortcut collisions R# introduces, especially if your fingers were trained on Visual Studio, but for someone like me coming from Java IDEs the ReSharper shortcuts are wonderful.

R# is a beautiful, beautiful thing – the extra weight is well worth it. What more could you ask for than IntelliJ in VS?

I can’t say I have much of a syntactical preference either way, but I would just say ‘Amen’ to earlier thoughts.

 

Running NUnit tests programmatically

I’m working on a test framework that needs to be run by less-technical testers. The tests are data-driven from a spreadsheet (the Google Docs spreadsheet API + GData).

Tests will be run locally (for now at least), since there isn’t a test lab available for remote execution and there’s no CI. I didn’t want to have to require users to install NUnit to execute tests.

At first I started by writing a class with a Main() method and rolling my own assertions. But I decided that the parameterized test features of NUnit were worth the effort of a little research. NUnit can, in fact, be run programmatically, though the execution appears less flexible than with other frameworks.

I created a TestRunner class with a Main() function:


using System;
using System.IO;
using NUnit.Core;
using NLog;

namespace oneshore.qa.testrunner
{
    class TestRunner
    {
        // static so it can be used from Main()
        private static Logger log = LogManager.GetCurrentClassLogger();

        public static void Main(String[] args)
        {
            // TODO: get this from command line args instead of hardcoding
            String pathToTestLibrary = "C:\\dev\\oneshore.Tests.DLL";

            DateTime startTime = System.DateTime.Now;
            log.Info("starting test execution...");

            TestRunner runner = new TestRunner();
            runner.run(pathToTestLibrary);

            log.Info("...test execution finished");
            DateTime finishTime = System.DateTime.Now;
            TimeSpan elapsedTime = finishTime.Subtract(startTime);
            log.Info("total elapsed time: " + elapsedTime);
        }

        public void run(String pathToTestLibrary)
        {
            // initialize the NUnit core services before building the suite
            CoreExtensions.Host.InitializeService();

            TestPackage testPackage = new TestPackage(pathToTestLibrary);
            testPackage.BasePath = Path.GetDirectoryName(pathToTestLibrary);

            // build one suite containing every test found in the assembly, and run it
            TestSuiteBuilder builder = new TestSuiteBuilder();
            TestSuite suite = builder.Build(testPackage);
            TestResult result = suite.Run(new NullListener(), TestFilter.Empty);

            log.Debug("has results? " + result.HasResults);
            log.Debug("results count: " + result.Results.Count);
            log.Debug("success? " + result.IsSuccess);
        }
    }
}

Link to gist of this code.

The advantage of running tests this way is that NUnit does not need to be installed (though the DLLs for NUnit — nunit.core.dll & nunit.core.interfaces.dll — and any other dependencies like NLog need to be packaged with the TestRunner). You can still write and execute your tests from NUnit while under development.

One disadvantage is that you don’t get the full test results, since the TestSuiteBuilder bundles every test it finds into one suite. I’d like to find a way to improve that. You also can’t run more than one test assembly at the same time — you can create an NUnit project XML file for that — but at that point you might as well bundle the NUnit test framework.
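One way I’m thinking of improving it (a sketch, assuming NUnit 2.x’s TestResult tree, where child results hang off HasResults/Results) is to walk the suite result recursively and log each individual test, e.g. as another method on the TestRunner class above:

// walk the NUnit TestResult tree and log each leaf (test case) result
private void ReportResults(TestResult result)
{
    if (result.HasResults)
    {
        // a suite or fixture: recurse into its children
        foreach (TestResult child in result.Results)
        {
            ReportResults(child);
        }
    }
    else
    {
        // a leaf: an individual test case
        log.Info(result.FullName + ": " + (result.IsSuccess ? "PASS" : "FAIL"));
        if (!result.IsSuccess && result.Message != null)
        {
            log.Info("    " + result.Message);
        }
    }
}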

For now, my base test class (that each of my NUnit tests inherits from) reports by catching and counting assertion failures and writing to a log file. It can then use the Quality Center integration tool I described in an earlier post, though I’m planning on wiring it all together eventually, so anyone can run tests automatically by clicking on an icon, using a file picker dialog to select the test library (see upcoming post), and have the test results entered in QC.
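Roughly, the base class looks something like this (the Verify() helper and counters here are a simplified sketch of that approach, not the exact implementation, and the QC reporting is left out):

using System;
using NUnit.Framework;
using NLog;

namespace oneshore.qa.testrunner
{
    // base class that each NUnit test fixture inherits from; it counts
    // assertion failures and writes them to a log file instead of
    // letting the first failure end the test
    public abstract class BaseTest
    {
        private static Logger log = LogManager.GetCurrentClassLogger();

        protected int Passed { get; private set; }
        protected int Failed { get; private set; }

        // wrap an assertion so a failure is logged and counted
        protected void Verify(Action assertion)
        {
            try
            {
                assertion();
                Passed++;
            }
            catch (AssertionException ex)
            {
                Failed++;
                log.Error("assertion failed: " + ex.Message);
            }
        }
    }
}

A test would call it like Verify(() => Assert.That(actual, Is.EqualTo(expected))), so the run continues and the failure count ends up in the log.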

This will allow distributed parameterized testing that can be done by anyone. I may try to set up a web UI like FitNesse for data-driven tests as well.