Scheduling tests to monitor websites

If you have access to your crontab, you can schedule a Selenium script to run periodically. If you don’t have cron, you can use a VM (with Vagrant) or a container (with Docker) to get it.

Cron is available on Linux & Unix systems. On Windows, you can use Task Scheduler. On Mac, there is launchd, but cron is also included (and is itself managed by launchd).
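On Windows, for example, you can create an hourly task from the command line with schtasks (the task name and script path here are placeholders):

schtasks /Create /SC HOURLY /TN "SeleniumMonitor" /TR "python C:\scripts\selenium-monitor.py"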

You could also set up a job to run on a schedule using a continuous integration server such as Jenkins, or write a simple, long-running script that runs in the background and sleeps between executions.

I have a service that runs Selenium tests and monitoring for my clients, and I use both cron and Jenkins to execute test runs regularly. I also have event-driven tasks that fire on a check-in or user request.

In a crontab, each line represents a task and its schedule, in the following format:

#minute   #hour     #day      #month    #weekday  #command

# perform a task every weekday morning at 7am
0         7         *         *         1-5       wakeup.sh

# perform a task every hour
@hourly python selenium-monitor.py

You can edit your crontab to create a task by typing crontab -e

You can view your crontab by typing crontab -l
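You can also add an entry non-interactively by appending to the existing crontab, which is handy when provisioning the Vagrant or Docker environments mentioned above (this assumes selenium-monitor.py is on the path):

# append the hourly monitor job to the current user’s crontab
(crontab -l 2>/dev/null; echo "@hourly python selenium-monitor.py") | crontab -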

If you just want to repeat the task within your script while it’s running, you can add a sleep statement and loop (either for a fixed number of iterations or until you kill the script).

#!/usr/bin/env python

from time import sleep
from selenium import webdriver

sites = ['https://google.com', 'https://bing.com', 'https://duck.com']

interval = 60    # seconds to sleep between rounds
iterations = 10  # number of rounds before exiting

def poll_site(url):
    """Load the page in a fresh browser and return its title."""
    driver = webdriver.Chrome()
    driver.get(url)
    title = driver.title
    driver.quit()
    return title

while iterations > 0:
    for url in sites:
        print(poll_site(url))
    sleep(interval)
    iterations -= 1
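
The original question also asked about being notified when a check fails. One lightweight approach is to catch exceptions from poll_site() and email yourself with the standard library’s smtplib. A minimal sketch: the SMTP host and addresses below are placeholders, and notify_failure() is a hypothetical helper, not part of the script above.

from email.message import EmailMessage
import smtplib

def notify_failure(url, error):
    # assemble and send a plain-text alert email
    msg = EmailMessage()
    msg['Subject'] = 'Site check failed: ' + url
    msg['From'] = 'monitor@example.com'
    msg['To'] = 'me@example.com'
    msg.set_content(str(error))
    with smtplib.SMTP('localhost') as server:
        server.send_message(msg)

# the inner loop then becomes:
for url in sites:
    try:
        print(poll_site(url))
    except Exception as e:
        notify_failure(url, e)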

See the example code on GitHub:

Originally posted on Quora:

https://www.quora.com/How-can-I-schedule-simple-website-test-scripts-Selenium-to-run-regularly-like-Cron-jobs-and-notify-me-if-it-fails-for-free/answer/Aaron-Evans-56

Running NUnit tests programmatically

I’m working on a test framework that needs to be run by less-technical testers. The tests are data-driven from a spreadsheet (via the Google Docs spreadsheet API and gdata).

Tests will be run locally (for now, at least) since there isn’t a test lab available for remote execution, and no CI server. I didn’t want to require users to install NUnit to execute tests.

At first I started by writing a class with a main() method and rolling my own assertions. But I decided that the parameterized test features of NUnit were worth the effort of a little research. NUnit can, in fact, be run programmatically, though the execution appears less flexible than with other frameworks.

I created a TestRunner class with a Main() method:


using System;
using System.IO;
using NUnit.Core;
using NUnit.Framework.Constraints;
using NLog;

namespace oneshore.qa.testrunner
{
    class TestRunner
    {
        // static so it can be used from Main()
        static Logger log = NLog.LogManager.GetCurrentClassLogger();

        public static void Main(String[] args)
        {
            // TODO: get this from command line args
            String pathToTestLibrary = "C:\\dev\\oneshore.Tests.DLL";

            DateTime startTime = System.DateTime.Now;
            log.Info("starting test execution...");

            TestRunner runner = new TestRunner();
            runner.run(pathToTestLibrary);

            log.Info("...test execution finished");
            DateTime finishTime = System.DateTime.Now;
            TimeSpan elapsedTime = finishTime.Subtract(startTime);
            log.Info("total elapsed time: " + elapsedTime);
        }

        public void run(String pathToTestLibrary)
        {
            // initialize the NUnit core services before loading tests
            CoreExtensions.Host.InitializeService();

            // build a single suite out of every test found in the assembly
            TestPackage testPackage = new TestPackage(pathToTestLibrary);
            testPackage.BasePath = Path.GetDirectoryName(pathToTestLibrary);
            TestSuiteBuilder builder = new TestSuiteBuilder();
            TestSuite suite = builder.Build(testPackage);

            // run everything, with no listener and no filtering
            TestResult result = suite.Run(new NullListener(), TestFilter.Empty);

            log.Debug("has results? " + result.HasResults);
            log.Debug("results count: " + result.Results.Count);
            log.Debug("success? " + result.IsSuccess);
        }
    }
}

Link to gist of this code.

The advantage of running tests this way is that NUnit does not need to be installed (though the NUnit DLLs, nunit.core.dll and nunit.core.interfaces.dll, and any other dependencies such as NLog need to be packaged with the TestRunner). You can still write and execute your tests from NUnit while under development.
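
The tests themselves are ordinary NUnit fixtures, so the parameterized features mentioned above work as usual. Here’s a generic illustration (the fixture and values are made up for this example, not taken from my spreadsheet-driven tests):

using NUnit.Framework;

namespace oneshore.Tests
{
    [TestFixture]
    public class ArithmeticTests
    {
        // each TestCase row runs as a separate test (NUnit 2.5+)
        [TestCase(1, 2, 3)]
        [TestCase(2, 2, 4)]
        [TestCase(-1, 1, 0)]
        public void AddShouldSum(int a, int b, int expected)
        {
            Assert.AreEqual(expected, a + b);
        }
    }
}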

One disadvantage is that, because the TestSuiteBuilder bundles every test it finds into one suite, you don’t get full per-test results. I’d like to find a way to improve that. You also can’t run more than one test assembly at a time. You can create an NUnit project XML file for that, but at that point you might as well bundle the whole NUnit test framework.
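
For reference, an NUnit project file is a small XML document listing the assemblies to run together (the config name and paths below are placeholders):

<NUnitProject>
  <Settings activeconfig="Default" />
  <Config name="Default">
    <assembly path="bin\Debug\oneshore.Tests.dll" />
    <assembly path="bin\Debug\oneshore.MoreTests.dll" />
  </Config>
</NUnitProject>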

For now, my base test class (which each of my NUnit tests inherits from) reports by catching and counting assertion failures and writing them to a log file. It can then use the Quality Center integration tool I described in an earlier post. I’m planning on wiring it all together eventually, so anyone can run tests automatically by clicking an icon, selecting the test library with a File Picker dialog (see upcoming post), and having test results entered in QC.
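
Here’s a minimal sketch of how such a base class might catch and count assertion failures. The class and method names are hypothetical, and a real implementation would write to the log file rather than stderr:

using System;
using NUnit.Framework;

namespace oneshore.qa.tests
{
    public abstract class LoggingTestBase
    {
        protected int FailureCount { get; private set; }

        // run an assertion, but count and log failures instead of aborting
        protected void Check(Action assertion)
        {
            try
            {
                assertion();
            }
            catch (AssertionException ex)
            {
                FailureCount++;
                Console.Error.WriteLine("FAIL: " + ex.Message);
            }
        }
    }
}

A test would then call Check(() => Assert.AreEqual(expected, actual)) instead of asserting directly.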

This will allow distributed parameterized testing that can be done by anyone. I may also try to set up a web UI like FitNesse for data-driven tests.