Sauce Connect tunnel for Sauce Labs real device cloud setup

I have helped a lot of Sauce Labs users, and one of the most common challenges is setting up a Sauce Connect tunnel to test against an internal environment.

The first thing you need to do is download and install the tunnel. It is a standalone command line executable available for Windows, Mac, and Linux. I recommend using Linux.

You can download Sauce Connect at:

https://wiki.saucelabs.com/display/DOCS/Downloading+Sauce+Connect+Proxy

Once downloaded, you need to extract the package to get the ‘sc’ binary from the /bin directory.

wget https://saucelabs.com/downloads/sc-4.5.4-linux.tar.gz
tar -xvzf sc-4.5.4-linux.tar.gz 
cd sc-4.5.4-linux/bin

To start the tunnel, simply pass your Sauce Labs username and access key from the command line or set the SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables:

sc -u $SAUCE_USERNAME -k $SAUCE_ACCESS_KEY -i $TUNNEL_IDENTIFIER
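
For example, in bash you can set the environment variables once (the values below are placeholders for your own credentials):

export SAUCE_USERNAME=your-sauce-username
export SAUCE_ACCESS_KEY=your-sauce-access-key
export TUNNEL_IDENTIFIER=my-tunnel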

There are quite a few other options that can be passed, and I won’t talk about them here, but you can see them by typing sc --help at the command line or by reading the documentation here:

https://wiki.saucelabs.com/display/DOCS/Sauce+Connect+Command+Line+Reference

In order to start a tunnel for the Sauce Labs mobile real device cloud, you need to pass one additional parameter to point to the mobile datacenter. You also need to specify a different API key, not your regular Sauce Labs access key.

So your command should look something like this:

sc -x https://us1.api.testobject.com/sc/rest/v1 -u $SAUCELABS_USERNAME -k $SAUCECONNECT_API_KEY -i $TUNNEL_IDENTIFIER

See also the sample script sauce-connect.sh, which includes additional parameters for setting a different port number, log file, etc. (these will conflict if you run another tunnel on the same host).
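
A minimal sketch of such a script might look like this (the relay port, log file, and PID file values here are arbitrary examples, not defaults; each tunnel on the same host needs its own):

#!/bin/bash
### sauce-connect.sh -- a minimal sketch, example values only
./sc -x https://us1.api.testobject.com/sc/rest/v1 \
  -u $SAUCELABS_USERNAME \
  -k $SAUCECONNECT_API_KEY \
  -i $TUNNEL_IDENTIFIER \
  --se-port 4446 \
  --logfile /tmp/sc-rdc.log \
  --pidfile /tmp/sc-rdc.pid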

Here is the full documentation for real device tunnels:

https://wiki.saucelabs.com/display/DOCS/Sauce+Connect+Proxy+and+Real+Device+Testing

Set custom name for JUnit Parameterized tests

For JUnit parameterized tests, you can add a descriptive name based on the parameters, like this:

@Parameters(name="{index}: {0} {1}")
public static Collection<Object[]> data() {
  return Arrays.asList(new Object[][] {
    { "x", "y" },
    { "foo", "bar" },
    { "hello", "world" }
  });
}

This will output test results like:

[0: x y]
[1: foo bar]
[2: hello world]
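
For context, a complete parameterized test class using this naming might look like the sketch below (the class name, fields, and test method are just illustrative):

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import static org.junit.Assert.assertNotNull;

@RunWith(Parameterized.class)
public class PairTest {

  @Parameters(name="{index}: {0} {1}")
  public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] {
      { "x", "y" },
      { "foo", "bar" },
      { "hello", "world" }
    });
  }

  private final String first;
  private final String second;

  // JUnit passes each row of data() to this constructor
  public PairTest(String first, String second) {
    this.first = first;
    this.second = second;
  }

  @Test
  public void valuesArePresent() {
    assertNotNull(first);
    assertNotNull(second);
  }
}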

See also:

https://www.javacodegeeks.com/2013/04/junit-naming-individual-test-cases-in-a-parameterized-test.html

Checking XPath and CSS selectors in the browser console

There are a couple of magic functions you can use to inspect and parse an HTML document while you’re reading it in the browser.

$x() allows you to check an XPath expression. It’s basically a shorthand for document.evaluate(xpath, document);

$$() allows you to check a CSS selector. It’s basically a shorthand for document.querySelectorAll(css);
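
For example, to try them out on any page from the DevTools console (these particular selectors are just illustrations):

// find every link with an href attribute, by XPath
$x('//a[@href]')

// the same query as a CSS selector
$$('a[href]')

// count the matches
$$('a[href]').length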

On Chrome $x() returns an XPathResult — just like document.evaluate() — which can only be inspected with the function iterateNext(). But on Safari and Firefox $x() will return an Array — just like $$() and document.querySelectorAll().

These shortcut functions can save some typing and mental effort.

Thanks to Andrew Krug from lazycoder.io for pointing out $x().

Keep Testing Weird

I’m at SauceCon 2019 in Austin, Texas, a test automation conference put on by my employer, Sauce Labs.

The theme for the conference is “Keep Testin’ Weird” — a play on the city’s slogan “Keep Austin Weird”.

So I thought to myself, what’s weird about testing? It didn’t take long to come up with a long list. Testing is weird, and I’d love to hear all the weird stories everyone else has about testing.

Besides all the weird things that happen while testing — testing itself is pretty weird.

If you’re a software tester, you realize the moment you’re asked to describe what you do for a living that it’s not like other professions. Personally, I’ve taken to just telling people “I work with computers” — and see how far down the rabbit hole they actually want to go. Which is a weird thing to do, but I guess I’m a little weird myself.

You kinda have to be weird to go into testing — or at least to stay at it very long. And not just because of all the weird stuff you encounter.

First of all, I don’t know anyone who ever deliberately went into testing. At least not until recently. It wasn’t really a known career path, and even for those who knew about it, testing wasn’t really highly regarded.

The act of testing itself is kinda weird. You’re not actually creating anything, but you have to be creative to be an effective tester. In fact, one of the qualities that make someone a good tester is that they like to break things. Testing is destructive — you have to destroy the product to save it. The greatest delight of a true tester is to find a truly catastrophic bug that is triggered in a really weird way.

You have to be a bit off to take pleasure in telling people that for all their hard work, it’s still not right. Testing is critical. Your job is not just to be the bearer of bad news, but to actively go out looking for it, and, since you have to justify your job, hoping to find it.

You have to be a bit immune to social criticism to be able to do so day after day. That means you probably don’t mind being weird.

Finding bugs is hard, especially when you’re handed this black box and you’re not only supposed to figure out how it works, but how it doesn’t. It takes a certain kind of perverse creativity to even come up with ways to test things effectively.

When you report a bug that can only happen under strange, esoteric circumstances, it’s often dismissed as irrelevant, something that would never happen under real world conditions. But the real world is weird, and it’s just those types of weird issues that developers and designers don’t anticipate that happen in production and cut across layers to expose fundamental flaws or weaknesses in systems.

That you need to justify testing is really weird. Testing improves quality, and quality is the primary source of value, but testing isn’t considered valuable. Testing is often left out or cut short, and it’s always under-budgeted and under-resourced, with inadequate time.

Testers have to have a varied skillset. You have to test things that you don’t understand. And you’re expected to find bugs and verify requirements. Without knowing the code, without understanding the requirements, and in many cases, without the practical experience of being an end user of the product you’re testing.

You’re not a developer, but you have to understand code. You’re not a product designer, but you have to understand the design and requirements in more depth than perhaps anyone else. You’re probably going to need to know not only how to test software, but how to build and deploy it.

How do you know when your job as a tester is done? Have you ever tried to define done? There’s development done, feature complete, deployed… But then there’s monitoring and maintenance, bug fixing and new features. Maybe you’re only really “done” when software has been deprecated and abandoned.

Is testing ever done? At some point you just have to draw a line and call it good enough. You can’t prove a negative, but your job is to declare a negative — this works with no issues — with a certain degree of certainty.

Test automation is weird. Writing automated tests while software is under development is like building the train as it’s running down the track, while the track is being laid — and testing that it works while it’s doing so.

Automation is meant to do the same thing over and over again, but why is it that test automation is so much harder to maintain than production code?

Automation code is throwaway code, but one of the greatest values comes when you can change the production code out from underneath it and the automation still passes — which means that the software isn’t broken. So you write test code to find bugs, but it actively prevents bugs from happening. That’s weird.

There is a lot more weirdness around testing and test automation, but as any good tester knows, when you’re writing or speaking you have to know when to stop, so I’ll end it here.

But I want to hear from you all. I’d like to ask you to share your thoughts and experiences: why is testing weird, what weirdness have you seen while testing, and what can we all do to keep testing weird?

TensorFlow Python virtualenv setup

Learning a new technology can be challenging, and sometimes setup can slow you down.

I wanted to experiment with machine learning and just follow along with a few tutorials. But first, I had to get a compatible environment up and running.

So, after several bookmark/footnote tweets, I thought I’d finally document it here:

### install pyenv and virtualenv 
brew install pyenv
brew install pyenv-virtualenv

### initialize pyenv
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

### install python 3.6 with pyenv
pyenv install 3.6.5

### create a virtualenv for using tensorflow
pyenv virtualenv 3.6.5 tf
pyenv activate tf

### install dependencies in virtualenv using pip
pip install --upgrade pip
pip install tensorflow

### optionally use jupyter web interface
pip install jupyter
jupyter notebook

### use tensorflow in your app
import tensorflow as tf
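
As a quick sanity check that the install worked, you can print the installed version from inside the virtualenv:

### verify tensorflow is installed
python -c "import tensorflow as tf; print(tf.__version__)"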

I created a gist.

https://gist.github.com/fijiaaron/0755a3b28b9e8aaf1cd9d4181c3df346

Here’s the great intro to Deep Learning video I followed along with:

I’ll follow up with a quick screencast video showing how the steps look together for those who want to see it.

How do you practice TDD for Android development projects?

TDD is more of a strategy than a specific process or tool. The idea is that testing drives your development — writing tests first, making sure tests pass to indicate development is complete, and continuously testing on each change to make sure no regressions or new bugs slip in.

You can use Espresso, Robotium, or UIAutomator directly to automate the mobile app, but testing through the UI is inherently slow and can be brittle, and it may not be possible (or easy) to write UI (or end-to-end) tests while an app is under development. The UI may not be testable, or back end services may not be complete at early stages.

With test driven development, you want to use your tests to inform what you develop. They tell you what to develop first, and they help you write your application in a way that is testable.

If you have some feature that needs to be tested — for example: delivering different size media (images & video) based on available bandwidth and screen size — testing it through the UI seems to make sense, since it is a UI feature.

But try writing your test the way you want it to look, not the way it actually behaves in the app. Start with your assertion:

assertThat(imageWidth).isEqualTo(deviceScreenWidth);

Now we try to satisfy that assertion.

First we need to get our values. Where does deviceScreenWidth come from? How do we determine imageWidth?

imageWidth is probably sent to the response processor so that when it sends the image URL it resizes the image — or selects the appropriately sized image.

That’s a design decision that’s already being influenced by our tests. Maybe we want standard sizes — small, medium, large — instead of trying to support every possible pixel width. Maybe isEqualTo should test within a range instead of just equal.

For deviceScreenWidth we need some representation of our device that includes its screen size. Do we get it from the userAgent or does the device send DisplayMetrics via an API? Is it passed from a service or a lookup table? Maybe we need a test of the function that parses a device identifier from the userAgent and calculates based on known values.

Now we know what code to write — and another test to write.

This can be a bit of a rabbit hole, but we don’t have to tackle everything at once.

In our unit test we just need to have an imageWidth and a deviceScreenWidth. We can make a note of what functions and parameters are needed to get this information, but for now we can just implement the functions immediately needed — and even make our first test pass by having those functions return hard coded values.

A nice simple test might look like this:

@Test
public void testImageCalculator() {
    DeviceMetaData device = new DeviceMetaData(SAMSUNG_GALAXY_S6);
    int deviceScreenWidth = device.getDisplayMetrics().screen.width;
    int imageWidth = getImageSizeForDevice(deviceScreenWidth);
    assertThat(imageWidth).isBetween(deviceScreenWidth, mediumDeviceMaxWidth);
}

Now we know what we need to develop next — the functions that make this test pass. A DeviceMetaData container class, something that gets display metrics for the device, and what we really care about (at this time) — the getImageSizeForDevice() function.
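
For illustration, a minimal sketch of those hard-coded stubs might look like this (every name here is hypothetical, carried over from the test above):

// Hypothetical stubs -- just enough to make the first test pass.
class DeviceMetaData {
    static final String SAMSUNG_GALAXY_S6 = "samsung-galaxy-s6"; // made-up identifier

    private final String deviceId;

    DeviceMetaData(String deviceId) { this.deviceId = deviceId; }

    DisplayMetrics getDisplayMetrics() {
        // hard-coded for now; later this could parse the userAgent,
        // hit an API, or read from a lookup table of known devices
        return new DisplayMetrics(1440);
    }
}

class DisplayMetrics {
    final Screen screen;

    DisplayMetrics(int width) { this.screen = new Screen(width); }

    static class Screen {
        final int width;
        Screen(int width) { this.width = width; }
    }
}

class ImageSizer {
    static final int mediumDeviceMaxWidth = 1440; // made-up "medium" cap

    // hard-coded: return the screen width (capped) so the first test passes
    static int getImageSizeForDevice(int deviceScreenWidth) {
        return Math.min(deviceScreenWidth, mediumDeviceMaxWidth);
    }
}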

NOTE: This was originally an answer to a question on Quora

Testing web & mobile app interaction with Selenium & Appium

Sometimes there is a need to test the way two different apps interact or work together.

Say you have a mobile app for a coffee shop. When you place your order with the app, a notice shows up on the order terminal for the barista.  They can then fulfill the order and have it ready for you — no waiting in line.

As a customer
I want to place an order for coffee on my iPhone
So that it's ready when I get to the coffee shop

But if they can’t fulfill the order (maybe they’re out of caramel-soy-macchi-whatever mix) they can let the customer know so that they can cancel or place a new order without waiting to find out that their order is not available.

As a barista
I want to notify customers when their order can't be fulfilled
So that they can change or cancel their order

There are two different apps here, and two different actors. This can make the test challenging. The obvious solution is to automate both apps at the same time (the mobile app for the customer and the web-based point of sale terminal for the barista).

Your test might look something like this:
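
(A hypothetical sketch, assuming Selenium 4 and a recent Appium Java client; CustomerApp, BaristaTerminal, and iosCapabilities are made-up page objects and capabilities standing in for your own.)

import java.net.URL;
import java.time.Duration;

import io.appium.java_client.AppiumDriver;
import io.appium.java_client.ios.IOSDriver;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.WebDriverWait;

public class CoffeeOrderEndToEndTest {

    @Test
    public void customerIsNotifiedWhenOrderCannotBeFulfilled() throws Exception {
        // one driver per app: Appium for the customer's phone,
        // Selenium for the barista's web terminal
        AppiumDriver mobile = new IOSDriver(new URL("http://localhost:4723/wd/hub"), iosCapabilities);
        WebDriver web = new ChromeDriver();

        CustomerApp customer = new CustomerApp(mobile);
        BaristaTerminal barista = new BaristaTerminal(web);

        // the customer places an order on the mobile app
        customer.placeOrder("caramel soy macchiato");

        // explicitly wait for the order to appear on the barista's terminal
        new WebDriverWait(web, Duration.ofSeconds(30))
                .until(driver -> barista.hasOrder("caramel soy macchiato"));

        // the barista can't fulfill it, so they reject the order
        barista.rejectOrder("caramel soy macchiato", "out of caramel-soy mix");

        // explicitly wait for the rejection notice to reach the customer's app
        new WebDriverWait(mobile, Duration.ofSeconds(30))
                .until(driver -> customer.seesNotification("can't be fulfilled"));
    }
}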

The problem here is that coordination between the two apps can be tricky. Synchronization and timing aren’t guaranteed, and I’m not sure the explicit waits will always handle this.

Also, it requires standing up both environments and making sure that the mobile app can communicate with your web app. It can get tricky. Not to mention it will be inherently slower, and the odds of random failures increase.

Another thing you can do is test the majority of use cases independently. This is hinted at by our two stories above: one for the barista (web app) and a separate one for the customer (mobile app).

Unless you have a really unique architecture, it’s likely that the two apps don’t actually know anything about each other.  They probably communicate through web services with a shared back end database or message queue.

Really, what you want to do is test each app independently and how it interacts with the service.  The service can be mocked or stubbed for some use cases, but for end-to-end tests, it makes sense to use the service.

So your test will now look something like this:
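
(Another hypothetical sketch: the mobile app is tested on its own, and the barista's side is played by an assumed OrderServiceClient wrapping the shared back-end API; mobile and customerId are assumed to be set up elsewhere in the test.)

@Test
public void customerIsNotifiedWhenOrderIsRejected() {
    CustomerApp customer = new CustomerApp(mobile);

    // made-up REST client for the shared back-end order service
    OrderServiceClient orderService = new OrderServiceClient("https://test-api.example.com");

    // the customer places an order on the mobile app
    customer.placeOrder("caramel soy macchiato");

    // verify the order reached the back end -- no barista UI involved
    Order order = orderService.getLatestOrderFor(customerId);
    assertEquals("caramel soy macchiato", order.getItem());

    // play the barista's part through the service
    orderService.rejectOrder(order.getId(), "out of caramel-soy mix");

    // verify the customer is notified in the mobile app
    new WebDriverWait(mobile, Duration.ofSeconds(30))
            .until(driver -> customer.seesNotification("can't be fulfilled"));
}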

This requires a clear API and may require some backend interaction that is not normally exposed. But the test is much cleaner (and more reliable), and if exposed services require additional security, you can have a separate test API endpoint or authorization token that enables the additional functionality. In this case, that shouldn’t be necessary.

You may still want to perform a few end-to-end sanity tests to make sure the environments are communicating correctly and are compatible, but the number of these tests can be greatly reduced — and the speed and reliability of your test suite improved.