Get a range of tests from Quality Center in SQL Server

SQL Server doesn’t have a LIMIT clause like MySQL, but a little digging with Google turned up a query that looks something like this:


SELECT test.*
FROM (
    SELECT test.*, ROW_NUMBER() OVER (ORDER BY TS_TEST_ID) AS row FROM test
) test
WHERE test.row BETWEEN 10 AND 15

based on this post on Stack Overflow:
http://stackoverflow.com/questions/5151013/limit-style-functionality-in-ms-sql-server-2005
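For reference, the classic pre-ROW_NUMBER() workaround is nested TOP queries. A sketch (assuming TS_TEST_ID is unique) that returns the same rows 10 through 15:

```sql
-- Take the first 15 rows, then the last 6 of those, then restore the order
SELECT * FROM (
    SELECT TOP 6 * FROM (
        SELECT TOP 15 * FROM test ORDER BY TS_TEST_ID
    ) first15 ORDER BY TS_TEST_ID DESC
) last6 ORDER BY TS_TEST_ID
```

The ROW_NUMBER() version above is easier to read and generalizes better, which is why it's the usual answer for SQL Server 2005.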


Integrating with QC — is it really worth it?

I’ve finally started writing about the work I did last year integrating HP Quality Center with JUnit (and with Bugzilla), and there seems to be some genuine interest in it, but as Elisabeth Hendrickson pointed out in a tweet:

@fijiaaron I’m curious if, after your experience w/Junit+QC, you see this as generally beneficial, or just a way to clean up a legacy mess?

It’s really not a useful task, even if it was a challenging technical accomplishment and a capability with significant demand. The answer should be “don’t do that” — make the price of having repeatable, automated, continuous integration tests be “use good tools.”

Do you give them what they want or do you teach them the right way to do it? Does this crutch make the overall software testing practice better or worse? Tactically, it might make one organization suck less, but strategically, does it do more harm than good? Wouldn’t it be better to move to more lightweight (and user friendly) tools that make integration easier — and have a working process instead of tools enforcing policies to monitor?

The answer is, of course, yes. But the best case isn’t always what you’re given. If I were Robert Oppenheimer, I think I’d still build the bomb. More Junit + Quality Center integration information coming up (after I get settled in.)

Lately…moving, looking for work, flying etc.

It’s been a long time since I wrote anything other than a tech related post here. I’m going to try to change that. Just think, this used to be a travelogue. That’s where the name ‘fijiaaron’ came from.

Lately I’ve been busy moving. We moved from south Renton (soon to become Kent) back to Bellevue. It was for complicated reasons, and there has been some drama around that, but in short, it’s because we want to move back to Ecuador.

So moving has been taking the majority of my time the last couple of weeks, even though Kelsey has been doing the majority of the work. I discovered Saturday morning that my driver's license was expired (happy birthday!). Luckily we had a friend who could help us move and drove the moving truck.

Thanks Jonah and all the rest who helped load & unload. BBQ and surfing sometime soon.

At the same time I find myself unexpectedly unemployed and with a higher rent. Two years ago I cut expenses and tried to make a go of it doing freelance consulting with One Shore but had to go back to work after 9 months just as a small amount of money started to trickle in.

Now I’d like to try again. I’m more focused, a better developer & tester, and I've learned a lot. But I’m low on cash, and if we plan to go to Ecuador at the end of the year, I need to build up reserves.

So I don’t know what to do. I’ve bid on a few development projects, but I’ve been answering recruiter calls all day the last few days, even while still only half settled into our new house.

What I’d really like to do is work on my startup. Second to that, I’d like to do consulting and some freelance work. Third is a short-term contract, preferably with a short commute; I’m thinking about canvassing the Eastgate office park to see if there’s any work for me. A distant next choice would be a longer-term role with telecommuting that I could continue from Ecuador.

Kelsey & Jama got me a flying lesson with John LaPorta. He was my old flight instructor and I haven’t been flying since I got my license in 2007. It was wonderful, but it only makes me want to do more. Harmon came with us and that was special. He loved it but Kelsey says he wouldn’t let her let go of him. She’s afraid of flying. Or at least my flying.

When I turned 9 I went flying with my dad (who didn’t finish getting his license when he got married). I had to save up two years’ worth of birthdays for it (and my little brother got to come along for free!)

Integrating JUnit with HP Quality Center – part 2

Integrating JUnit tests with HP/Mercury Quality Center

Part 2: reporting annotation coverage using a base class

In my previous post I talked about adding an annotation to JUnit test cases that identified corresponding manual test cases defined in Quality Center. In this post, I’ll describe how I used those annotations to create a coverage report by having my annotated test cases extend a base class.

Once the test classes were properly annotated, every unit test was made to extend a base class. (This only works with JUnit 4 because JUnit 3 requires extending junit.framework.TestCase). The base class uses reflection to get the test name and report coverage from the annotations.

import static org.junit.Assert.fail;

import java.io.FileWriter;
import java.io.IOException;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;
import org.junit.After;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Rule;
import org.junit.rules.TestName;

public class TestBase {
	@Rule
	public TestName testName = new TestName();
	protected static String previousTestName;
	protected static boolean isFirstRunMethod; // this is to check for a class with more than one method

	protected static final Logger log = Logger.getLogger("qcCoverageReport");

	protected static final String COVERAGE_REPORT_FILENAME = "qcCoverageReport.csv";
	protected static final String COVERAGE_REPORT_DELIMITER = ",";
	protected static final boolean COVERAGE_REPORT_APPEND = true;


	@BeforeClass
	public static void init() {
		PropertyConfigurator.configure("log4j.properties");
		isFirstRunMethod = true;
	}

	// note: @Before methods must be instance methods in JUnit 4, not static
	@Before
	public void setUp() {
		if (! executeTests()) {
			// fail fast so only the coverage report is generated
			fail("creating coverage report");
		}
	}

	@After
	public void tearDown() {
		printQCTestCaseCoverage();
		writeCoverageReport(buildCoverageReport());
		isFirstRunMethod = false;
		previousTestName = testName.getMethodName();
	}

	// this is a simple method that just writes test coverage to a log file
	private void printQCTestCaseCoverage() {
		try {
			Class<?> clazz = Class.forName(this.getClass().getName());
			Method method = clazz.getMethod(testName.getMethodName());
			if (method.isAnnotationPresent(QCTestCases.class)) {
				log.info("Class [" + clazz.getName() + "] test method [" + method.getName() + "].");
				QCTestCases qcTestCases = method.getAnnotation(QCTestCases.class);
				for (String element : qcTestCases.covered()) {
					log.info("QC Test Cases Covered [" + element + "].");
				}
				for (String element : qcTestCases.related()) {
					log.info("QC Test Cases Related [" + element + "].");
				}
			}
		} catch (Throwable t) {
			t.printStackTrace(System.err);
		}
	}

	// this is a more complex method that builds a collection and eliminates duplicates
	public StringBuilder buildCoverageReport() {
		StringBuilder coverage = new StringBuilder();

		// get test case information via reflection
		String packageName = this.getClass().getPackage().getName();
		String className = this.getClass().getSimpleName();
		String methodName = testName.getMethodName();
		boolean isSameAsLastMethod = false;

		// see if it's the same test run again (e.g. parameterized)
		if (methodName.equals(previousTestName)) {
			isSameAsLastMethod = true;
		}

		// check whether this is the first test case for this class
		if (isFirstRunMethod && !isSameAsLastMethod) {
			// write package name in the 1st column
			coverage.append("\n");
			coverage.append(packageName);

			// write class name in the 2nd column
			coverage.append("\n,");
			coverage.append(className);
		}

		if (!isSameAsLastMethod) {
			// write method name in the 3rd column
			coverage.append("\n,,");
			coverage.append(methodName);

			for (String coveredTestCase : getCoveredQCTestCases()) {
				if (!coveredTestCase.isEmpty()) {
					// write covered test cases in the 4th column
					coverage.append("\n,,,");
					coverage.append(coveredTestCase);

					// write 'covered' in the 5th column
					coverage.append(",covered");
				}
			}

			for (String relatedTestCase : getRelatedQCTestCases()) {
				if (!relatedTestCase.isEmpty()) {
					// write related test cases in the 4th column
					coverage.append("\n,,,");
					coverage.append(relatedTestCase);

					// write 'related' in the 5th column
					coverage.append(", related");
				}
			}
		}

		return coverage;
	}

	public List<String> getCoveredQCTestCases() {
		List<String> coveredTestCases = new ArrayList<String>();

		try {
			Class<?> clazz = Class.forName(this.getClass().getName());
			Method method = clazz.getMethod(testName.getMethodName());

			if (method.isAnnotationPresent(QCTestCases.class)) {
				QCTestCases qcTestCases = method.getAnnotation(QCTestCases.class);

				for (String testCase : qcTestCases.covered()) {
					coveredTestCases.add(testCase);
				}
			}
		} catch (ClassNotFoundException e) {
			e.printStackTrace(System.err);
		} catch (NoSuchMethodException e) {
			e.printStackTrace(System.err);
		}

		return coveredTestCases;
	}

	// identical to getCoveredQCTestCases except it reads qcTestCases.related();
	// the two could be refactored into a common method
	public List<String> getRelatedQCTestCases() {
		List<String> relatedTestCases = new ArrayList<String>();

		try {
			Class<?> clazz = Class.forName(this.getClass().getName());
			Method method = clazz.getMethod(testName.getMethodName());

			if (method.isAnnotationPresent(QCTestCases.class)) {
				QCTestCases qcTestCases = method.getAnnotation(QCTestCases.class);

				for (String testCase : qcTestCases.related()) {
					relatedTestCases.add(testCase);
				}
			}
		} catch (ClassNotFoundException e) {
			e.printStackTrace(System.err);
		} catch (NoSuchMethodException e) {
			e.printStackTrace(System.err);
		}

		return relatedTestCases;
	}

	public boolean executeTests() {
		// Set this to false if you just want to generate a coverage report.
		// We actually determine this from test properties but that's not important to this example
		return false;
	}

	public void writeCoverageReport(StringBuilder coverageReport) {
		try {
			FileWriter writer = new FileWriter(COVERAGE_REPORT_FILENAME, COVERAGE_REPORT_APPEND);
			writer.append(coverageReport);
			writer.close();
		} catch (IOException e) {
			e.printStackTrace();
		}
	}
}

The @Rule annotation is a newer feature of JUnit 4. One built-in rule is TestName, which allows you to get the name of the currently running test from inside a test case.

There are actually two ways to get test coverage.

The simpler method [printQCTestCaseCoverage] just writes to a log file after every test case executes. It outputs the test case name, and a list of covered and related test cases.

The more complex method [buildCoverageReport] compares the test with previous test methods and checks for duplicates in coverage to avoid repetition. It uses some ugly logic hackery to get there, and all this will actually end up refactored out, so just look at printQCTestCaseCoverage for the basics of using reflection to get the test case name and annotation.

You can now have your JUnit test cases extend TestBase and get a CSV report of test coverage.

public class MyTest extends TestBase {

	@Test
	@QCTestCases(covered = { "QC-TEST-1", "QC-TEST-2" }, related = { "QC-TEST-3", "QC-TEST-4", "QC-TEST-5" })
	public void testSomething() {
		//implementation...
	}

	@Test
	@QCTestCases(covered = { "QC-TEST-6" })
	public void testSomethingElse() {
		//implementation...
	}
}

This will generate a CSV report that looks like this [qcCoverageReport.csv]:

com.mycompany,MyTest,,,
,,testSomething,,
,,,QC-TEST-1, covered
,,,QC-TEST-2, covered
,,,QC-TEST-3, related
,,,QC-TEST-4, related
,,,QC-TEST-5, related
,,testSomethingElse,,
,,,QC-TEST-6,covered
,AnotherTest,,,
,,TestThis,QC-TEST-7,covered
,,TestThat,QC-TEST-8,covered
,,,QC-TEST-9,related

which ends up looking like a nested outline of packages, classes, methods, and test cases if you open it in Excel.

I could just as easily have included package, class, and method name on every line by eliminating some newlines. This is cheap (hacky) report generation, but it serves our purposes here. I might use a CSV library to handle things like properly escaping fields, but by the time I got to that point, I had refactored the reporting completely out of the base class.
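As a minimal sketch of what "properly escaping fields" means (this is a hypothetical helper, not code from the project): a field containing the delimiter, a quote, or a newline has to be wrapped in quotes, with any embedded quotes doubled.

```java
// Hypothetical CSV field escaping, the kind of thing a CSV library handles for you.
public class CsvEscape {

	public static String escape(String field) {
		// quote the field if it contains a comma, quote, or newline
		if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
			// embedded quotes are doubled per the CSV convention
			return "\"" + field.replace("\"", "\"\"") + "\"";
		}
		return field;
	}

	public static void main(String[] args) {
		System.out.println(escape("QC-TEST-1"));        // plain field, unchanged
		System.out.println(escape("covered, related")); // quoted because of the comma
	}
}
```

The hand-rolled report above never hits this case because test IDs and status labels contain no commas, which is why the hack was good enough here.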

My next post will talk about how I went from reporting coverage to reporting results — which turned out to be trickier than I thought.

Integrating JUnit tests with HP/Mercury Quality Center


Part 1: initial analysis and annotation

A while back I did some work for a client who had two sets of test cases. On the one hand, they had hundreds of manual test cases documented in Quality Center. On the other hand, they had hundreds of automated test cases written using JUnit. I was given the task of correlating the two sets of test cases and determining to what extent they overlapped and, if possible, updating the test results in QC with the JUnit test case results.

This represented a significant burden for the client because:

  1. They were not capable of determining direct correlation between the tests
  2. Given the results of the JUnit tests, it would be a laborious task to update test results manually in Quality Center every week

My solution was to analyze the JUnit test cases and compare them with summaries of the QC test cases. Because I did not have sufficient domain knowledge (at first), I was not capable of properly determining correlation. However, I had the assistance of a domain expert who, although not an experienced Java developer, was sufficiently technical to be able to edit the source code and commit changes to Subversion with a little help. I could interpret the source code, and he could then identify the corresponding QC test case (or cases).

For the first stage, we would add an annotation to each JUnit test method: @QCTestCases

This was a simple annotation and looked something like this:

	import java.lang.annotation.ElementType;
	import java.lang.annotation.Retention;
	import java.lang.annotation.RetentionPolicy;
	import java.lang.annotation.Target;

	@Target({ElementType.METHOD, ElementType.TYPE})
	@Retention(RetentionPolicy.RUNTIME)
	public @interface QCTestCases {

	    String[] covered() default "";
	    String[] related() default "";

	}

This allowed us to add a list of covered test cases as well as a list of related test cases to each test method:

	@Test
	@QCTestCases(covered = { "QC-TEST-1" }, related = { "QC-TEST-2", "QC-TEST-3", "QC-TEST-4" })
	public void testSomething() {
		//implementation...
	}

My definition of “covered” was a test that definitely corresponded, such that if the JUnit test fails, then the QC test case should also fail; “related” tests were a bit more ambiguous and would require further analysis.

In practice it turned out that related tests were an unknown bucket that with some refinement could be eliminated or tweaked to provide direct coverage, but it was a useful categorization at this early stage when we didn’t want to have too many false failures. In an ideal situation there would be a direct 1-to-1 mapping, but that’s seldom the case.

Once I had annotations in place, I thought it would be fairly straightforward to map the cases and report the results. Using reflection, I could get the test names and results, and then use my mapping to upload the results to QC via the OTA (Open Test Architecture) API, HP/Mercury’s public interface to Quality Center.
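The reflection step can be sketched as a standalone example. This is a self-contained illustration, not the project code: QCTestCases here is a nested stand-in that mirrors the annotation defined above, and the method and class names are made up. Because the annotation has RUNTIME retention, it can be read off a method at runtime:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AnnotationDemo {

	// stand-in for the QCTestCases annotation defined earlier
	@Target({ElementType.METHOD, ElementType.TYPE})
	@Retention(RetentionPolicy.RUNTIME)
	public @interface QCTestCases {
		String[] covered() default "";
		String[] related() default "";
	}

	@QCTestCases(covered = { "QC-TEST-1" }, related = { "QC-TEST-2", "QC-TEST-3" })
	public void testSomething() {
		// implementation...
	}

	// look up a method by name and return the QC test cases it covers
	public static List<String> coveredTestCases(String methodName) {
		List<String> covered = new ArrayList<String>();
		try {
			Method method = AnnotationDemo.class.getMethod(methodName);
			if (method.isAnnotationPresent(QCTestCases.class)) {
				covered.addAll(Arrays.asList(method.getAnnotation(QCTestCases.class).covered()));
			}
		} catch (NoSuchMethodException e) {
			e.printStackTrace(System.err);
		}
		return covered;
	}

	public static void main(String[] args) {
		System.out.println(coveredTestCases("testSomething")); // [QC-TEST-1]
	}
}
```

This is essentially what the TestBase class in part 2 does, with the test method name supplied by JUnit's TestName rule instead of a hard-coded string.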

The initial analysis was tedious work but everything else depended on getting it right. It turned out to take several iterations before we had accurate coverage and a workable solution. But the strategy was in place.

In my next post, I’ll describe how I used the annotations to get results for mapping test cases.