Creating a Bug in Quality Center using the OTA API

I’ve had several requests for more examples using the OTA API. This is a companion piece to the previous post about Executing Tests in Quality Center using the OTA API.

If you’re running tests outside of Quality Center, some are bound to fail. And if you have to write a defect in QC to associate with the test, why should you have to open it up just for that?

While I don’t currently have access to a working copy of QC to test it, creating a bug is pretty straightforward.

Once you have a connection established:

TDConnection connection = new TDConnection();
connection.InitConnectionEx(qcUrl);
connection.ConnectProjectEx(qcDomain, qcProject, qcLoginName, qcPassword);

You can get the BugFactory and create a bug with AddItem():

BugFactory bugFactory = connection.BugFactory;
Bug bug = bugFactory.AddItem(null);

AddItem() takes an object with data, but we can also add the data manually:

bug.Status = "New";
bug.Project = "QCIntegration";
bug.Summary = "Short description of the bug";
bug.DetectedBy = "Aaron Evans";
bug.AssignedTo = "Nobody";
bug.Priority = "Low";

Finally, when you’re done updating your bug, you call Post() to save it.

bug.Post();

There are some tricks for adding attachments and associating a bug with a test run, but I won’t go into that now.

Here’s the gist of this example. And it’s also included in the QCIntegration Examples.


Executing Tests in Quality Center using the OTA API

I’ve had several requests recently for more examples using the OTA API.   So I thought I’d pull some examples out of the comments section of this post https://fijiaaron.wordpress.com/2011/11/17/updating-test-results-in-qc-using-the-qc-ota-api-explained/ and give them their own space.

The first example is executing tests. In order to execute a test, you first need to get a TestSet. This corresponds to a specific test set in the Test Lab.

Start off by opening a connection to Quality Center:

var connection = new TDConnection();
connection.InitConnectionEx(qcUrl);
connection.ConnectProjectEx(qcDomain, qcProject, qcLoginName, qcPassword);

Get a TestSetFactory and a TestSetTreeManager from the connection to find a TestSet:

TestSetFactory testSetFactory = connection.TestSetFactory;
TestSetTreeManager testSetTreeManager = connection.TestSetTreeManager;

Then search a specified path (starting with “ROOT/”) for TestSets that match a given name. You can actually retrieve multiple TestSets if you specify part of a name, or all test sets in a folder and its subfolders if you pass an empty string. If you’re pretty sure you have an exact match, you can just take the first result from the list:

TestSetFolder testSetFolder = (TestSetFolder) testSetTreeManager.NodeByPath[testSetPath];
List testSetList = testSetFolder.FindTestSets(testSetName);
TestSet testSet = testSetList[1]; // OTA lists are 1-based

It’s probably worth noting that the TDOLEAPI has its own concept of a list. It was written before C# generic collections existed, so it takes a little work to convert a TDOLEAPI List to a generic List. I won’t go into that now. Just treat it like an array and iterate over it with an index.
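The array-style iteration described above can be sketched like this (a sketch only: it assumes the TDAPIOLELib interop assembly, the testSetList from the example above, and that OTA COM lists expose Count with a 1-based indexer):

```csharp
using System.Collections.Generic;
using TDAPIOLELib; // the QC OTA COM interop assembly

// Copy the COM collection into a generic List<TestSet>
// so you can use it with foreach, LINQ, etc.
List<TestSet> testSets = new List<TestSet>();
for (int i = 1; i <= testSetList.Count; i++) // OTA lists are 1-based
{
    testSets.Add((TestSet) testSetList[i]);
}
```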

Once you have a TestSet, you can run it by calling the scheduler:

TSScheduler scheduler = testSet.StartExecution("");
scheduler.RunAllLocally = true;
scheduler.Run();

And that’s all there is to it.

The empty string passed to StartExecution() is a server name, but as you can see, we want to run them locally in this example, so it’s blank. Run() can optionally take an object containing test data.

You can see the full example in this gist. I’ve also added it to the QCIntegration Examples.

Walkaway Automation

I was asked by a fellow tester about “walkaway automation”.  Here are my thoughts:

Walkaway automation is 100% automated

No setup steps or manual verification are needed.  It should run with a single command that can be triggered automatically by the system.

Walkaway automation can run anywhere

It can run on a developer’s box before check-in or in a staging environment.

Walkaway automation does not have external dependencies

It does not depend on the environment, external data, or configuration.  Services are mocked, faked, or managed in-process.  Dependencies are stubbed or injected. It creates its own data.

Walkaway automation does not have side effects

It leaves no traces: it doesn’t modify persistent state or leave processes running.

Walkaway automation is not brittle

It’s not likely to break if the UI changes, and it doesn’t rely on functionality that is not part of the public API being tested.  It shouldn’t error because of race conditions.

Walkaway automation can be easily maintained

It is modular: a refactoring change should only need to be made in one place.  If something goes wrong, it sends a notification.  It doesn’t try to do anything tricky or fancy.  The code is simple and easy to understand.

Walkaway automation clearly documents exactly what is being tested

You can tell from the test name what it is supposed to do.  You know what systems are being exercised and how.

Walkaway automation is completely deterministic

It should never fail for an unknown reason.

Walkaway automation doesn’t produce false positives

When a test fails, it means something is wrong.

Walkaway automation doesn’t give a false sense of comfort

It fails unless a specific check passes.  You should have confidence in a passing test.  This is the opposite of false positives.

Walkaway automation runs automatically, all the time

You shouldn’t have to tell it to run; it should run on every check-in, build, or deployment (depending on the test level), whether or not anyone remembers it.

Deploying a web app with RPM

It’s just a tarball.  You unzip it and put it in a public folder.  Or you just copy a WAR and let Tomcat or JBoss unpack it.

Why would you want to go through the effort of creating an RPM (or other package) to install your app if it’s so simple?

There are several reasons you might want to:

  1. Using RPM helps you keep track of versions
  2. Using RPM allows your package manager to tell you if the app is installed
  3. Using RPM allows you to keep multiple deployments in sync
  4. Using RPM helps you to specify and install dependencies

Also, there are some things that you might forget:

  1. You might also have to edit a config file
  2. Which means you might need to restart the server
  3. And you might want to clean up some files (such as logs) that are outside the deployment
  4. And you might forget to do one or more of these things

You could use a custom script or a deployment tool like Capistrano to handle this for you, or you could use an OS-standard tool.

But I’m not really interested in arguing why.

Like Tennyson said, sometimes “Ours is not to reason why…”

Here’s how you can do it:

Like many, I was intimidated by creating an RPM spec, but it’s not that hard.

An RPM is defined by a .spec file and built using the tool rpmbuild.

If you’re on a Red Hat based system and you type vi example.spec, vim will generate an RPM spec template for you.

You can start by filling in the preamble which describes your project:

Name: 
Version: 
Release: 1%{?dist}
Summary: 
Group: 
License: 
URL: 
Source0: 
BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX) 
BuildRequires: 
Requires:

%description

%prep
%setup -q

%build
%configure
make %{?_smp_mflags}

%install
rm -rf $RPM_BUILD_ROOT
make install DESTDIR=$RPM_BUILD_ROOT

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%doc

Name, Version, and Summary are self-explanatory.

Release is 1 by default, but if the dist macro is defined it will be appended (e.g. ‘fc’ for Fedora Core).

Group helps determine where it will install in menus.  It’s not really important unless you care about that.

License can be GPL or whatever you want (“© 2012 One Shore Inc”, for instance).

URL is an optional link with more info about your RPM

Source0 is the first source for the tarball.  You can just use Source if there is only 1 source.

BuildRoot is where it builds.  It is optional and can also be specified from the command line with --buildroot DIRECTORY.

BuildRequires means a dependency that is needed to build your RPM.  If you’re not building from source, you don’t need this.

Requires specifies other RPMs needed before you can use this one.  I specified httpd, but that’s not necessary.  You can specify a version (httpd = 2.2.14) or a range (httpd >= 2) if you need to.

Some fields, notably Release and BuildRoot, are filled in for you with macros.  You can just leave these alone unless you care.

Some other fields you can add include:

Target should be noarch if the architecture doesn’t matter.  It might be i386.  Target replaces the older BuildArch.  It can be specified on the rpmbuild command line with the flag --target.

Vendor can optionally specify the RPM vendor

%description is an optional section that can include more details about your project.
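Pulling those fields together, a filled-in preamble for a hypothetical web app might look something like this (every value below is made up for illustration):

```
Name: myapp
Version: 1.0
Release: 1%{?dist}
Summary: Example web application packaged as an RPM
Group: Applications/Internet
License: Proprietary
URL: http://www.example.com/myapp
Source0: %{name}-%{version}.tar.gz
BuildArch: noarch
Requires: httpd >= 2.2

%description
A hypothetical web application, deployed to /var/www by this package.
```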

Now we move on to the RPM implementation details.

%prep contains steps to prepare.  You can cleanup, create directories, etc.

%setup: if you have a tarball, the %setup macro will handle it without any options; -q means quiet.  Normally all you need is:

%prep
%setup -q

%build is where you compile your RPM.  If you don’t need to compile, you can leave this section blank.  Typically, even if you do, all you have is something like:

./configure
make

%install is where you tell it how to deploy.  If you have a Makefile you can just do:

make install

If you want to copy files to /var/www you can use normal shell commands:

rm -rf $RPM_BUILD_ROOT
mkdir $RPM_BUILD_ROOT
mkdir -p -m0755 $RPM_BUILD_ROOT/var/
mkdir -p -m0755 $RPM_BUILD_ROOT/var/www/
mkdir -p -m0755 $RPM_BUILD_ROOT/var/www/%{name}
cp -rp * $RPM_BUILD_ROOT/var/www/%{name}

$RPM_BUILD_ROOT is an environment variable.  It’s where your RPM will be built.  We’re just cleaning it up and then making sure our folders are in place inside the build root.

%{name} is a macro that expands to the Name defined in the preamble.

Then we copy the contents of our unzipped tarball into the expected file structure.

%clean is like make clean for your RPM build.  Typically just

rm -rf $RPM_BUILD_ROOT

%post runs after install.  You could bounce the webserver here, for instance:

service httpd restart

%preun is what needs to be done before uninstall.
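As a sketch, a matching pair of scriptlets might look like this (the service name and the log path are assumptions, not something the example app defines):

```
%post
# after install: restart the webserver so it serves the new content
service httpd restart

%preun
# before uninstall: clean up files the app wrote outside the package
rm -rf /var/log/myapp
```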

%files is a list of the files that are included in the package.  If you want to install everything, you can run this script in your install section:

find . -type f |sed -e 's/^\.//' > $RPM_BUILD_DIR/file.list.%{name}
find . -type l | sed -e 's,^\.,\%attr(-\,root\,root) ,' >> $RPM_BUILD_DIR/file.list.%{name}

And then refer to it in the files section:

%files -f ../file.list.%{name}
%defattr(-,root,root,-)


This will simply collect a list of the files (and links) in your project folder under BUILD into a file named file.list.myproject, and then include them under the %files section.
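If you want to see what that file list looks like, here’s a self-contained dry run in a scratch directory (the file names are made up, and no RPM tooling is needed):

```shell
# Build a fake tree like the one %install creates.
tmp=$(mktemp -d)
mkdir -p "$tmp/var/www/myapp"
echo 'hello' > "$tmp/var/www/myapp/index.html"
ln -s index.html "$tmp/var/www/myapp/home.html"

# Write the list outside the tree being scanned,
# just as the spec writes it to $RPM_BUILD_DIR.
list=$(mktemp)
cd "$tmp"
find . -type f | sed -e 's/^\.//' > "$list"
find . -type l | sed -e 's,^\.,\%attr(-\,root\,root) ,' >> "$list"

cat "$list"
# /var/www/myapp/index.html
# %attr(-,root,root) /var/www/myapp/home.html
```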

%defattr sets the default file attributes.  The file mode (e.g. 755) and dir mode can be just a “-” if no changes are needed. It’s in the format:

(<file mode>, <user>, <group>, <dir mode>)

More about the files list is here http://www.rpm.org/max-rpm-snapshot/s1-rpm-specref-files-list-directives.html

More information on the RPM spec file can be found at http://www.rpm.org/max-rpm/s1-rpm-build-creating-spec-file.html

Once you have an RPM .spec file in place, you can then build your RPM with rpmbuild:

rpmbuild -ba myproject.spec

The -ba flag builds both the binary and source RPMs; rpmbuild needs one of its build mode flags to run.

More information on the rpmbuild command can be found at http://www.rpm.org/max-rpm-snapshot/rpmbuild.8.html

This intro has gotten quite a bit longer than expected.  In the next post, I’ll show how to build an RPM using ant and a template spec file rpm.spec.in