Scheduling tests to monitor websites

If you have access to your crontab, you can set a Selenium script to run periodically. If you don’t have cron, you can use a VM (with Vagrant) or a container (with Docker) to get it.

Cron is available on Linux and Unix systems. On Windows, you can use Task Scheduler. On macOS, the native scheduler is launchd, but cron is also included (launchd runs it under the hood).

You could also set up a job to run on a schedule using a continuous integration server such as Jenkins, or write a simple, long-running script that runs in the background and sleeps between executions.

I have a service that runs Selenium tests and monitoring for my clients, and I use both cron and Jenkins for executing test runs regularly. I also have event-triggered tasks that can be kicked off by a check-in or user request.

Each line in a crontab represents a scheduled task in the following format:

#minute   #hour     #day      #month    #weekday  #command

# perform a task every weekday morning at 7am
0         7         *         *         1-5       wakeup.sh

# perform a task every hour
@hourly python selenium-monitor.py

You can edit your crontab to create a task by typing crontab -e

You can view your crontab by typing crontab -l
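
Note that cron runs with a minimal environment (a sparse PATH and no display), so it’s safest to use absolute paths and capture output. For example, an hourly entry for the monitor script might look like this (the paths here are placeholders for wherever your python and script actually live):

# run the monitor every hour, appending output and errors to a log
0 * * * * /usr/bin/python /home/me/selenium-monitor.py >> /home/me/selenium-monitor.log 2>&1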

If you just want to repeat the task within your script while it’s running, you can add a sleep statement and loop (either for a fixed number of iterations or until you kill the script).

#!/usr/bin/env python

from time import sleep
from selenium import webdriver

sites = ['https://google.com', 'https://bing.com', 'https://duck.com']

interval = 60    # seconds to wait between polls
iterations = 10  # number of polling rounds

def poll_site(url):
    # open a fresh browser, fetch the page, and report its title
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        return driver.title
    finally:
        driver.quit()

while iterations > 0:
    for url in sites:
        print(poll_site(url))
    sleep(interval)
    iterations -= 1
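
One caveat: a job started by cron has no display attached, so launching a regular Chrome window may fail. A headless variant of poll_site() avoids this. Here’s a minimal sketch using Selenium’s standard ChromeOptions (depending on your Selenium version, the keyword argument may be chrome_options instead of options):

def poll_site_headless(url):
    # run Chrome without a visible window so it works under cron
    options = webdriver.ChromeOptions()
    options.add_argument('--headless')
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        return driver.title
    finally:
        driver.quit()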

See the example code on github:

Originally posted on Quora:

https://www.quora.com/How-can-I-schedule-simple-website-test-scripts-Selenium-to-run-regularly-like-Cron-jobs-and-notify-me-if-it-fails-for-free/answer/Aaron-Evans-56

Sauce Connect tunnel for Sauce Labs real device cloud setup

I have helped a lot of Sauce Labs users, and one of the common challenges is setting up a Sauce Connect tunnel in order to test against your internal environment.

The first thing you need to do is download and install the tunnel. It is a standalone command line executable available for Windows, Mac, and Linux. I recommend using Linux.

You can download Sauce Connect at:

https://wiki.saucelabs.com/display/DOCS/Downloading+Sauce+Connect+Proxy

Once downloaded, extract the package to get the ‘sc’ binary from its bin directory.

wget https://saucelabs.com/downloads/sc-4.5.4-linux.tar.gz
tar -xvzf sc-4.5.4-linux.tar.gz 
cd sc-4.5.4-linux/bin

To start the tunnel, pass your Sauce Labs username and access key on the command line (the example below references them via the SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables, which sc can also read directly). The -i flag assigns a tunnel identifier, a name your tests can use to target this specific tunnel:

sc -u $SAUCE_USERNAME -k $SAUCE_ACCESS_KEY -i $TUNNEL_IDENTIFIER

There are quite a few other options that can be passed, and I won’t talk about them here, but you can see them by typing sc --help at the command line or by reading the documentation here:

https://wiki.saucelabs.com/display/DOCS/Sauce+Connect+Command+Line+Reference
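
Once the tunnel is running, tests executed on Sauce Labs can reach your internal environment by specifying the matching tunnel identifier. Here’s a minimal Python sketch (the capability names follow the legacy Sauce Labs protocol, and internal.example.com is a placeholder; check the docs for your own framework and endpoint):

import os
from selenium import webdriver

capabilities = {
    'browserName': 'chrome',
    'username': os.environ['SAUCE_USERNAME'],
    'accessKey': os.environ['SAUCE_ACCESS_KEY'],
    'tunnelIdentifier': os.environ['TUNNEL_IDENTIFIER'],  # must match the -i value
}

driver = webdriver.Remote(
    command_executor='https://ondemand.saucelabs.com/wd/hub',
    desired_capabilities=capabilities)

driver.get('https://internal.example.com')  # resolved through the tunnel
print(driver.title)
driver.quit()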

In order to start a tunnel for the Sauce Labs mobile real device cloud, you need to pass one additional parameter to point to the mobile datacenter. You also need to specify a different API key (see screenshot below).

So your command should look something like this:

sc -x https://us1.api.testobject.com/sc/rest/v1 -u $SAUCELABS_USERNAME -k $SAUCECONNECT_API_KEY -i $TUNNEL_IDENTIFIER

See also the sample script sauce-connect.sh, which includes additional parameters for setting a different port number, log file, etc. (the defaults will conflict if you run another tunnel on the same host).

Here is the full documentation for real device tunnels:

https://wiki.saucelabs.com/display/DOCS/Sauce+Connect+Proxy+and+Real+Device+Testing

Set a custom name for JUnit Parameterized tests

For JUnit parameterized tests (a test class annotated with @RunWith(Parameterized.class)), you can add a descriptive name based on the parameters like this:

@Parameters(name="{index}: {0} {1}")
public static Collection<Object[]> data() {
  return Arrays.asList(new Object[][] {
    { "x", "y" },
    { "foo", "bar" },
    { "hello", "world" }
  });
}

This will output test results like:

[0: x y]
[1: foo bar]
[2: hello world]

See also:

https://www.javacodegeeks.com/2013/04/junit-naming-individual-test-cases-in-a-parameterized-test.html

Checking XPath and CSS selectors in the browser console

There are a couple of magic functions you can use to inspect and parse an HTML document while you’re reading it in the browser.

$x() allows you to check an XPath expression. It’s basically shorthand for document.evaluate(xpath, document);

$$() allows you to check a CSS selector. It’s basically shorthand for document.querySelectorAll(css);

On Chrome, $x() returns an XPathResult — just like document.evaluate() — which can only be inspected by calling iterateNext(). But on Safari and Firefox, $x() returns an Array — just like $$() and document.querySelectorAll().

These shortcut functions can save some typing and mental effort.

Thanks to Andrew Krug from lazycoder.io for pointing out $x().

Keep Testing Weird

I’m at SauceCon 2019 in Austin, Texas which is a test automation conference put on by my employer, Sauce Labs.

The theme for the conference is “Keep Testin’ Weird” — a play on the city’s slogan “Keep Austin Weird”.

So I thought to myself, what’s weird about testing? It didn’t take long to come up with a long list. Testing is weird, and I’d love to hear all the weird stories everyone else has about testing.

Besides all the weird things that happen while testing — testing itself is pretty weird.

If you’re a software tester, you realize this the moment you’re asked to describe what you do for a living: it’s not like other professions. Personally, I’ve taken to just telling people “I work with computers” and seeing how far down the rabbit hole they actually want to go. Which is a weird thing to do, but I guess I’m a little weird myself.

You kinda have to be weird to go into testing — or at least to stay at it very long. And not just because of all the weird stuff you encounter.

First of all, I don’t know anyone who ever deliberately went into testing. At least not until recently. It wasn’t really a known career path, and even for those who knew about it, testing wasn’t really highly regarded.

The act of testing itself is kinda weird. You’re not actually creating anything, but you have to be creative to be an effective tester. In fact, one of the qualities that make someone a good tester is that they like to break things. Testing is destructive — you have to destroy the product to save it. The greatest delight of a true tester is to find a truly catastrophic bug that is triggered in a really weird way.

You have to be a bit off to take pleasure in telling people that, for all their hard work, it’s still not right. Testing is critical. Your job is not just to be the bearer of bad news, but to go out actively looking for it. And since you have to justify your job, you’re hoping to find it.

You have to be a bit immune to social criticism to be able to do so day after day. That means you probably don’t mind being weird.

Finding bugs is hard, especially when you’re handed this black box and you’re not only supposed to figure out how it works, but how it doesn’t. It takes a certain kind of perverse creativity to even come up with ways to test things effectively.

When you report a bug that can only happen under strange, esoteric circumstances, it’s often dismissed as irrelevant, as something that would never happen under real-world conditions. But the real world is weird, and it’s exactly those weird issues that developers and designers don’t anticipate that happen in production and cut across layers to expose fundamental flaws or weaknesses in systems.

That you need to justify testing is really weird. Testing improves quality, and quality is the primary source of value, but testing isn’t considered valuable. Testing is often left out or cut short, always under-budgeted and under-resourced, with inadequate time.

Testers have to have a varied skillset. You have to test things that you don’t understand. And you’re expected to find bugs and verify requirements without knowing the code, without understanding the requirements, and in many cases without the practical experience of being an end user of the product you’re testing.

You’re not a developer, but you have to understand code. You’re not a product designer, but you have to understand the design and requirements in more depth than perhaps anyone else. You’re probably going to need to know not only how to test software, but how to build and deploy it.

How do you know when your job as a tester is done? Have you ever tried to define done? There’s development done, feature complete, deployed… But then there’s monitoring and maintenance, bug fixing and new features. Maybe you’re only really “done” when software has been deprecated and abandoned.

Is testing ever done? At some point you just have to draw a line and call it good enough. You can’t prove a negative, but your job is to declare a negative — this works with no issues — with a certain degree of certainty.

Test automation is weird. Writing automated tests while software is under development is like building the train as it’s running down the track, while the track is being laid — and testing that it works while it’s doing so.

Automation is meant to do the same thing over and over again, but why is it that test automation is so much harder to maintain than production code?

Automation code is throwaway code, but one of the greatest values comes when you can change the production code out from underneath it and the automation still passes — which means that the software isn’t broken. So you write test code to find bugs, but it actively prevents bugs from happening. That’s weird.

There is a lot more weirdness around testing and test automation, but as any good tester knows, when you’re writing or speaking, you have to know when to stop, so I’ll end it here.

But I want to hear from you all. Share your thoughts and experiences: why is testing weird, what weirdness have you seen while testing, and what can we all do to keep testing weird?

TensorFlow Python virtualenv setup

Learning a new technology can be challenging, and sometimes setup can slow you down.

I wanted to experiment with machine learning and follow along with a few tutorials. But first, I had to get a compatible environment up and running.

So, after several bookmark/footnote tweets, I thought I’d document it here.

### install pyenv and virtualenv 
brew install pyenv
brew install pyenv-virtualenv

### initialize pyenv
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

### install python 3.6 with pyenv
pyenv install 3.6.5

### create a virtualenv for using tensorflow
pyenv virtualenv 3.6.5 tf
pyenv activate tf

### install dependencies in the virtualenv using pip
pip install --upgrade pip
pip install tensorflow

### optionally use jupyter web interface
pip install jupyter
jupyter notebook

### use tensorflow in your app
import tensorflow as tf
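
To verify the install, you can run a quick sanity check. This assumes the TensorFlow 1.x API (which is what pip installed at the time of writing; 2.x has no Session):

import tensorflow as tf

# build and run a trivial graph to confirm the installation works
hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))  # prints b'Hello, TensorFlow!'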

I created a gist.

https://gist.github.com/fijiaaron/0755a3b28b9e8aaf1cd9d4181c3df346

Here’s the great intro to Deep Learning video I followed along with:

I’ll follow up with a quick screencast video showing how the steps look together for those who want to see it.

How do you practice TDD for Android development projects?

TDD is more of a strategy than a specific process or tool. The idea is that testing drives your development: writing tests first, making sure tests pass to indicate development is complete, and continuously testing on each change to make sure no regressions or new bugs slip in.

You can use Espresso, Robotium, or UIAutomator directly to automate the mobile app, but testing through the UI is inherently slow and can be brittle, and it may not be possible (or easy) to write UI (or end-to-end) tests while an app is under development. The UI may not be testable yet, or back-end services may not be complete at early stages.

With test-driven development, you want your tests to inform what you develop: they tell you what to build first, and they help you write your application in a way that is testable.

If you have some feature that needs to be tested, for example delivering different-size media (images and video) based on available bandwidth and screen size, then testing it through the UI seems to make sense, since it is a UI feature.

But try writing your test the way you want it to look, not the way it actually behaves in the app. Start with your assertion:

assertThat(imageWidth).isEqualTo(deviceScreenWidth);

Now we try to satisfy that assertion.

First we need to get our values. Where does deviceScreenWidth come from? How do we determine imageWidth?

imageWidth is probably sent to the response processor so that when it sends the image URL, it resizes the image or selects the appropriately sized image.

That’s a design decision that’s already being influenced by our tests. Maybe we want standard sizes (small, medium, large) instead of trying to support every possible pixel width. Maybe isEqualTo should test within a range instead of exact equality.

For deviceScreenWidth we need some representation of our device that includes its screen size. Do we get it from the userAgent, or does the device send DisplayMetrics via an API? Is it passed from a service or a lookup table? Maybe we need a test of the function that parses a device identifier from the userAgent and calculates based on known values.

Now we know what code to write — and another test to write.

This can be a bit of a rabbit hole, but we don’t have to tackle everything at once.

In our unit test we just need to have an imageWidth and a deviceScreenWidth. We can make a note of what functions and parameters are needed to get this information, but for now we can just implement the functions immediately needed — and even make our first test pass by having those functions return hard-coded values.

A nice simple test might look like this:

@Test
public void testImageCalculator()
{
    // DeviceMetaData, SAMSUNG_GALAXY_S6, and mediumDeviceMaxWidth are the
    // hypothetical pieces this test drives us to build
    DeviceMetaData device = new DeviceMetaData(SAMSUNG_GALAXY_S6);
    int deviceScreenWidth = device.getDisplayMetrics().screen.width;
    int imageWidth = getImageSizeForDevice(deviceScreenWidth);
    assertThat(imageWidth).isBetween(deviceScreenWidth, mediumDeviceMaxWidth);
}

Now we know what we need to develop next — the functions that make this test pass. A DeviceMetaData container class, something that gets display metrics for the device, and what we really care about (at this time) — the getImageSizeForDevice() function.

NOTE: This was originally an answer to a question on Quora