Why Mockito’s @InjectMocks is evil

Why Mockito’s @InjectMocks is evil:
https://lnkd.in/gDK2ktX

TL;DR: when injection fails, it fails silently

@InjectMocks tries to satisfy each dependency of the system under test with a @Mock, injecting it via constructor, setter, or field.

If injection doesn't succeed by any of those routes, it fails silently, and you're left wondering why you're getting a NullPointerException, for example.

Better to explicitly declare and add your mocks with Mockito.mock() instead of relying on @InjectMocks to autowire your dependencies.
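Here's a minimal sketch of the explicit approach (OrderService and PaymentClient are invented for illustration and defined inline so the example stands alone): construct the system under test yourself, so a missing dependency shows up as a compile error instead of a silent null.

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

  // Hypothetical collaborator and system under test, defined inline for the sketch.
  interface PaymentClient {
    boolean charge(String orderId);
  }

  static class OrderService {
    private final PaymentClient payments;
    OrderService(PaymentClient payments) { this.payments = payments; }
    boolean placeOrder(String orderId) { return payments.charge(orderId); }
  }

  @Test
  void placesOrderWhenPaymentSucceeds() {
    // Mock declared explicitly -- no @Mock or @InjectMocks annotations.
    PaymentClient payments = mock(PaymentClient.class);
    when(payments.charge("order-1")).thenReturn(true);

    // Constructor injection done by hand: if OrderService grows a new dependency,
    // this line stops compiling instead of silently leaving a field null at runtime.
    OrderService service = new OrderService(payments);

    assertTrue(service.placeOrder("order-1"));
  }
}

The point isn't these particular classes; it's that the wiring is visible in the test instead of hidden behind an annotation.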

Automation Strategy Manifesto

Automation is key to a successful DevOps strategy

In order to accelerate release velocity from quarterly to bi-weekly, we need to develop and release software in smaller cycles.

This allows us to make decisions in an agile manner and get faster feedback from users.

But the release cycle has inherent friction.

To reduce this friction and release more frequently, we will need to automate our software build and deployment processes.

A DevOps approach lets us create consistent, automated delivery processes.

Continuous integration & delivery tools can help with this.

In order to have confidence in our release cycle, we need to be able to test quickly.

To avoid testing becoming a bottleneck, we need to have automated, reliable tests.

Traditional end-to-end functional test automation can be slow, brittle, and expensive to maintain.

So our aim is to provide thorough test coverage with unit tests.

Unit tests are isolated, target a single object, method, or procedure and are faster and more consistent than end-to-end tests.

Unit tests provide a way to mitigate technical debt by making refactoring easier, and giving you confidence to change existing code without introducing regressions.

We will use code coverage tools to identify where we can improve tests, but not as a sole metric of quality, since too many unit tests add technical debt too.

Because not everything can be unit tested, we will use integration and system testing to cover areas of the application not easily tested at the unit level.

Also, because integration points (especially with external systems) are the most likely place for unexpected errors, we will focus on testing these areas.

Manual acceptance and exploratory testing are also necessary, and accomplish some things better than automation can.

Because of thorough unit, integration, and system test automation, people can focus on those aspects that are more challenging to automate.

100% automation is not the goal, because sometimes it takes more time and effort than it's worth.

Test automation also incurs technical debt and can make code rigid and resistant to change if you also have to change tests.

In order to have confidence in tests, we will strive to keep results “green” — no failing tests are allowed to live.

To make sure automation is useful and timely, we will create a build-test-deployment pipeline including:

  • source control management
  • mainline development & branching strategy
  • code review process on pull requests
  • static analysis tools for code quality & security
  • automated builds
  • unit testing code while mocking external dependencies
  • code coverage metrics
  • controlled and versioned code artifacts
  • ephemeral & interchangeable environments created on demand
  • automated infrastructure provisioning
  • automated deployment & configuration
  • secure secrets sharing
  • blue green & canary releases
  • post-deployment smoke testing and monitoring & metrics
  • automatic rollback & notification

Two types of tests

There are lots of different types of tests:

unit tests
functional tests
integration tests
system integration tests
ui tests
api tests
performance tests
load tests
acceptance tests
accessibility tests
security tests
etc…

But I like to group them into two primary categories:

Tests you can perform on the code itself, and tests that require a complete system to be deployed.

There are obviously exceptions, and it's more of a spectrum: how much needs to be compiled, linked, combined, integrated, or deployed, and which backend or front-end pieces, operating systems, network connections, or other external dependencies are involved.

But most software nowadays requires some sort of deployment — to a server or device, and probably some sort of connection — to a file system, a database, or network service.

And much of it needs some sort of compilation, preprocessing, or integration with external dependencies. Even code written in interpreted languages like Python or JavaScript will depend on common libraries.

While it is possible to test a single unit of code in isolation, it’s often not practical to exclude all external libraries (even if just to print or log output) and you usually need a runtime, libraries, or operating system to execute them on.

So I tend not to be too purist about unit tests vs. integration tests, except to make this distinction: my code, or the code that needs to be tested (as part of your system), should not be combined with other code that also needs to be tested, as opposed to code that you assume to be working as designed. That said, the quality of some external libraries is questionable, and you're likely to encounter bugs or deficiencies in such libraries (open source or closed).

The distinction I like to make is this:

Can the code (compiled or interpreted, linked or not) be tested before being deployed, or not?

This is an important distinction, because it determines whether the test can be executed before or after said deployment — and that matters for where the test is executed in your delivery pipeline.

Another way this distinction is generally useful: does the test matter to the developer or to the end user? By which I mean not whether anyone cares if the test passes or fails, but who cares about the result of the test.

A developer test is a test that the developer who wrote the code is primarily interested in. The output of a developer test (whether unit or integration test) is useful to the developer of the code — the implementor — and not really to anyone else.

The question here is: did the code do what I intended it to do?

This is in contrast to functional tests (to use another term which is perhaps too vague to be really useful) where the output is useful to the user (or designer) of the system under test.

The question there is: does the software actually do what I wanted it to do?

If the author of the code is both the designer and end user, these questions amount to the same thing, but much software is designed to be used by someone other than the developer, or rather the developer creates code designed by someone else, intended for others to use.

So the functionality of the system is actually being tested, instead of the correctness of the code itself.

To reiterate, my two general categories of testing are:

Developer tests (unit or integration) which are performed on code before it is deployed,
and

System tests (functional or end-to-end), which are performed on a system after it is deployed, and which generally have external dependencies on the file system, network, or other services (databases, etc.)

That's not to say that there aren't more categories of tests, or that you can't do functional testing with mocked external systems, or developer tests that require external dependencies. But these two groupings are generally useful to think about, for three reasons:

  1. Who cares about the test?
  2. Does the test have external dependencies that require additional setup?
  3. Can the test be performed before or after a deployment?

These two types of tests usually have other attributes:

The first group is small, fast, and simple.
The second group is larger, slower, and more complex.

Another advantage of developer tests is that they can be executed frequently, relied upon for repeatability, and are easy to debug (generally targeting a single object or function).

And to be fair, functional (or system) tests also have an advantage: they exercise real-world scenarios that developer tests may not be able to anticipate, and they are concerned primarily with outcomes, not implementation.

Working with large datasets in Python

How can you work with large datasets in Python — millions, billions, or more records?

Here is a great answer to that question:

https://www.quora.com/Can-Python-Pandas-handle-10-million-rows-What-are-some-useful-techniques-to-work-with-the-large-data-frames

  1. Pandas is memory hungry; you may need 8-16GB of memory or more to load your dataset into memory and work with it efficiently.

You can use large / extra large AWS cloud systems for temporary access. Being able to spin up cloud platforms on demand is only one part of the equation. You also need to get your data in and out of the cloud platform. So persistent storage and on-demand compute is a likely strategy.

  2. Work in stages, and save each stage.

You will also want to be able to save your intermediate states. If you process CSV or JSON, or perform filter, map, reduce, and similar operations, you'll want those to be atomic steps.

And you’ll want to persist work as you go. If you process 100 million rows of data and something happens on row 99 million, you don’t want to have to re-do the whole process to get a clean data transformation. Especially if it takes several minutes or hours.

Better to save each stage iteratively and incur the IO cost in your ETL or processing loop than to lose your work — or have your data corrupted.

  3. Take advantage of multiprocessing

Break work into batches that can be performed in parallel. If you have multiple CPUs, take advantage of them.

Python doesn’t do this by default, and Pandas normally works on a single thread. Dask or Vaex can work in parallel where Pandas itself cannot.
You might also consider using a distributed processing engine such as Apache Spark instead of doing all your processing in a single DataFrame.

  4. Use efficient functions and data objects

Earlier I talked about saving incrementally within your processing loop. But don’t go overboard.

You do not want to open and close files on every iteration of your inner loop, millions of times. Make sure you fetch the data you need only once. Don't reinitialize objects. Even something as simple as calling len(data) can add up: finding the length of a growing list with millions of rows, millions of times, is wasted work.

Also, consider when you want to use a list vs a numpy array, etc.

Test Automation isn’t for everyone

I once knew a guy who was a talented craftsman; he could make beautiful handcrafted furniture. His business became popular and demand increased. He got investment, built a large shop, hired a couple of assistants, and bought additional machinery to help meet the demand.

The business was a big success and grew even more. He was almost able to pay back his loan in just a couple years, but then sold his business (which is still successful today under new owners) and went back to working alone on individual orders. He still does quite well and with the proceeds from the sale has a comfortable life, but nothing like it could have been if he’d kept the company.

It turned out he wasn’t as interested in running a manufacturing business as working with his hands, alone, in his little shop in the woods.

Testers (and their bosses) should consider this before thinking about switching from manual QA to automation. It takes different skills and a different mindset, which can be learned, but may not be what you enjoy.

I happened to enjoy the change myself, but I can sympathize with those who don’t.

Challenges inheriting an existing test framework

This post started as a comment on the following discussion on LinkedIn.

Inheriting a framework can be challenging.

First of all, a real-world framework is more complex than something freshly created; there are bound to be exception cases and technical debt included.

Secondly, an existing test framework was built by people on deadlines with limited knowledge. There are bound to be mistakes, hacks, and workarounds. Not to mention input from multiple people with different skillsets and opinions.

In this case, the best way to understand your framework is to understand your tests. Your tests exercise the framework. Pick one of low-moderate complexity and work your way through it, understanding the configuration, data, and architecture.

Don’t be afraid to break things. After all, you have all these tests that will let you know if you broke the framework.

Lastly, having a checklist of things to look for, good and bad practices will help you understand the framework better — and help you know what quality of framework you’re dealing with. Is this mystery function really clever or just a bad idea?

Look for standard patterns. OOP, Page Objects, etc. Also look for common problems – hard coded values, repetitious code, etc.

Test Automation Can’t Catch Everything

I remember a time, years ago, when I was working at a company at which I learned a lot about my craft.

Selenium was fairly new and I was one of the early adopters. I’d developed a pattern for structuring tests that I shared with the community and found that several others had independently developed ideas similar to my own “Page Objects.”

Agile was just beginning to penetrate mainstream development, and was at the same time attracting some healthy skepticism. Pair programming and test driven development were considered "extreme," and other patterns like Scrum were thought of as either common sense or a cargo cult.

Continuous Integration was still a novel idea, although my first job at Microsoft was essentially as part of a manual continuous integration process; no tools existed at the time to accomplish it. Now there were open source projects like CruiseControl and Hudson (which became Jenkins) coming out.

And while I'd been involved in, and an advocate of, each of these — Selenium, Agile, and Continuous Integration — I'd yet to see them widely adopted and successfully implemented within an organization.

But about the time I came back from Fiji and stepped back into software development, all these things were starting to coalesce. At least they were for me, in the mid 2000s.

We had a cross functional team of (mainly) developers, testers, sysadmins, designers, and analysts, all working together. We had business & customers giving feedback and acceptance criteria. We wrote user stories and assigned points. We wrote tests first and testers paired with developers. We sat with customers and took usability notes. We talked design patterns and had self organizing teams. We had fun and I learned a lot. I still keep in contact with some of my co-workers from more than a dozen years ago.

It was about a year and a half into it that we started to hit the wall. Tests were taking too long to execute. Complexity was slowing us down. We kept plugging along and using technology to fight the friction.

Parallel builds, restructuring tests, virtualized development environments. We fought technical debt with technical solutions and beat it back down.

I thought we were winning. But I had a manager, an old school QA guy, who knew that something was off. I think the rest of us were too busy churning out code, delivering features, spinning our wheels to see it.

But he saw it. He tried to alert upper management, he tried to get through to developers. We had lots of automated tests. We were delivering features every two weeks. Occasionally we'd stop and do a spike to experiment, or take some time off to reduce tech debt, refactor code, or rewrite tests. But we still had decent momentum.

Finally he got together a group of stakeholders. Give me 15 minutes, he said, and I'll break it. It took him maybe five. And it wasn't that hard. And it was something users could be expected to do.

The moral of the story is: you can't catch everything with automation. Automation is good: it frees up time to do other things, it allows feedback to occur faster, and it helps testers concentrate on finding issues instead of verifying functionality. It helps developers think about how they're going to write their code, reduce complexity, and define clean interfaces.

Test automation is like guard rails. It can keep you from falling off the edge, or it can give you something to hold onto as you traverse a tricky slope.

It can catch obvious problems, much like the railing that helps you not fall off the ledge (you weren't planning to do that anyway). Anyone can walk along a narrow path and not fall off a cliff; test automation makes it so that you can run. But it won't stop you from going over if you go past it.

In order for tests to be effective you need to have clear requirements, and then you need to explore beyond them. Test automation isn’t good at exploring. It’s good at adding guard rails to a known path.

And even if you catch all the functional problems, you need to be able to check performance, ensure security, and test usability.

If you have testers churning out tests, both automated and exploratory, but no one else is involved or hears their feedback, it's not going to matter much. It's like having guard rails and warning signs that everyone just hops over and walks right past. It's no good finding bugs if they don't get fixed. And eventually, if the bug backlog gets too big, people will just ignore it.

You need test automation, but you also need exploratory, human testing. And you need everyone — not just testers to be concerned with quality. Quality code, environments, user experience, and requirements.

So use test automation for what it does best — providing quick feedback for repetitive, known functionality. Don’t try to get it to replace a complete quality assurance process.

There are two types of tests

Here’s a great question from a fellow test automation expert:

What are your thoughts on doing test automation by developers vs testers?

I think there’s a place for both. (That’s the politic answer.)

But in general, I categorize tests into two main types: developer focused tests, and user focused tests.

The goal of developer focused testing is to help the developer keep their code organized, to verify what they write does what they intended, and to allow them to mitigate technical debt.

Unit tests are the obvious example here. They show that function x returns y given condition z, which gives developers confidence that they did what was expected and accounted for exceptions and edge cases, helps establish clean interfaces, and aids refactoring.
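As a throwaway illustration (the discount calculation is invented, not from any real codebase), a developer test of that kind might look like:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

  // Hypothetical function under test, defined inline so the sketch stands alone.
  static double totalWithDiscount(double subtotal, double discountRate) {
    return subtotal * (1 - discountRate);
  }

  // "Function x returns y given condition z": given a 10% discount,
  // a subtotal of 100.00 should come out to 90.00.
  @Test
  void appliesDiscountToSubtotal() {
    assertEquals(90.0, totalWithDiscount(100.0, 0.10), 0.001);
  }
}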

But there is also a lot more developer logic in front ends these days, so developers may write some UI tests, particularly if they are front-end developers primarily concerned with the UI. UI testing usually requires data and user events. It's great if they can isolate and mock these to test UI components individually, but the components also need to be tested for correct integration.

So developers writing UI test automation does make sense.

But…

The other type of testing, user testing / acceptance testing / functional testing / end-to-end testing / regression testing / exploratory testing / quality assurance — whatever you want to call it (and these are overlapping buckets, not synonyms) — is based on the principle that you can't know what you don't know.

A developer can’t write a test that checks that a requirement has been met beyond what he understands the requirement to be. You can’t see your blind spots.

And a developer’s task is primarily creative — I want to get the computer to do this. They aren’t thinking in the mindset of what else could go wrong, or what about this other scenario.

It’s like having an editor look at your writing (I could use an editor). You’re too close to it to look at it objectively.

That's not to say you can't go back to it with fresh eyes later, take off your "developer" hat and put on your "tester" hat. But it's likely that someone else will see different things. And looking at it from an objective (or at least different) perspective is likely to identify different issues.

Some people are naturally (or trained to be) better at finding those exceptions and edge cases. My wife calls that type of person “critical” or “pessimistic” — I prefer the term tester.

Regardless, the second type of test — the big picture test, that looks at things from a user’s perspective, not a developer’s perspective, is — I think — critical.

And the industry has assumed that for decades. How that is accomplished, and how valuable it is, has always been up for debate, but I think the general principles are:

  1. That there are two types of tests: those designed to help the developer do his work, and those that are designed to check what he has done.
  2. That there are two different roles: the creative process of making something work (development) and the exploratory process of making sure something doesn’t go wrong (testing).

Anyway, that’s my (overly) long take, summed up. I could go on about this for hours.

On Customer Development

I was recently asked about Customer Development as a process. I looked it up to see what the formal definition is, and concluded that I don’t know too much about the official “Customer Development” process, but I understand and practice the general principles of Customer Development in my own business.

Here is my reply:

Do you know how to do customer development?

Not in a formal way. But I have two strategies I use for finding & acquiring customers.

  1. In areas where I have expertise (such as software testing & delivery), I've built a strong network and reputation by publishing articles & tutorials and offering training videos and meetups. This works great.
  2. In areas where I am not an expert but have built products on a larger scale, I have relied on intuition and research to discover needs. I then build a customer base as I develop the product, through social media and direct-contact "growth hacking" with a freemium model.

With Resumelink, for example, I contacted individuals I thought would benefit, and offered the service before it was built — and manually did the steps before having a product, and got feedback on features. I initially built it for myself, but with some keyword research I realized there was a hole in the market.

So I think it takes an initial spark of inspiration to identify a need; research to see if there is a market niche that can be filled, and to determine its profitability; a proof of concept that is more a manual process than a product; then targeting your perfect customer to identify ways to refine and improve the product, developing it in small feature increments as the needs become apparent (and keeping other ideas on the backlog); and finally growth hacking through social media, content delivery, and community outreach.

Checking state in Selenium Test Automation

I wrote a medium length novel in response to Nikolay Advolodkin’s post about a common Selenium automation pattern. He advocates:

STOP CHECKING IF PAGE IS LOADED IN AUTOMATED UI TEST

You can read his article on his site, Ultimate QA: https://ultimateqa.com/stop-checking-if-page-is-loaded-in-automated-ui-test/

Or follow the discussion on his linkedin post: https://www.linkedin.com/posts/nikolayadvolodkin_testautomation-java-selenium-activity-6674278585743761408-Ud_p

Nikolay is a good friend, so please don’t take this as an attack. We have had many long discussions like this.

Here is my response in full:

I go the other direction on this. I use element checks to verify that the page is loaded.

My version of the Page Object pattern includes an isLoaded() method, which can be overridden with custom checks as needed. This is to try to keep things synchronized, even though it means extra steps. In this case, I value stability over performance.

I can understand someone making another decision however, especially when speed is important and latency between steps with a remote driver makes this more costly.

In practical terms, you could just check if the element you want to interact with is available and fail faster if it is not. The result of both success and failure would be the same, and you’d get there slightly faster — perhaps significantly faster if you have a long sequence of many page loads. But having such long test flows is a pattern I try to avoid, unless I’m explicitly testing the long flow through the UI.
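A rough sketch of that fail-faster alternative, assuming Selenium 4's Duration-based waits (the locator, timeout, and class name are made up):

import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class FailFastExample {

  // Assumes a WebDriver created elsewhere (ChromeDriver, RemoteWebDriver, etc.).
  void clickSubmitWhenReady(WebDriver driver) {
    // Wait only for the element you are about to interact with; if it never
    // appears, the test fails here with a timeout rather than later with a
    // vaguer error further down the flow.
    WebElement submit = new WebDriverWait(driver, Duration.ofSeconds(5))
        .until(ExpectedConditions.elementToBeClickable(By.id("submit")));
    submit.click();
  }
}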

Adding the sanity check helps me when it is time to debug the test or analyze the test failure. Knowing that my page is not loaded — or that I’m not on the page I expected helps me to understand the true cause of failure, the page is not loaded, rather than an element is not clicked.

However, I would not call isLoaded() repeatedly: only once, automatically when a page object is initialized, or explicitly if I have a logical reason to think the page might no longer be loaded, such as after a state change.

Selenium tests (and UI tests in general) are brittle, and determining state before attempting to perform an action is one of the biggest challenges.

The challenge here is that an HTTP 200 status code doesn't really mean a page is loaded anymore. With dynamic page creation, JavaScript frameworks, single-page apps, and prefetching, it's hard to tell: pages can load in chunks, dynamic elements can be added, and sometimes the concept of a “page” doesn't even make sense.

Checking status codes or XHR ready state is meaningless (or at least misleading) in many modern web applications. But you see people trying to do this to figure out the state of their app so they can reliably automate it. That usually doesn't work. So checking the state of the element you need to interact with makes more sense, as well as saving time.

The WebDriver team dropped the ball on this — or at least punted. Selenium used to check that a page was loaded (using the methods above) but decided the decision was too complex and left it up to the user. I think this was an abrogation of responsibility — but don’t tell Jim or Simon that. It’s a less discussed detail of their least favorite topic.  

Validating state is hard, and most of the time, leaving it up to the user results in bugs. It’s even harder with mobile apps and the Appium team has had to make many difficult decisions about this, and sometimes a framework gets it wrong, or makes things unnecessarily slow.

So like most things there is a trade off between speed and reliability, and we all need to make our own decisions.

When you adopt the "page object" pattern for components that make up only part of a page, or that may appear on multiple pages, having an explicit, user-defined check makes even more sense, because widget.isLoaded() is a state check that can happen more than once, not just a one-time sanity check.
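A rough sketch of that component-level idea (the widget class and locator are invented for illustration):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A reusable component object for a widget that may appear on several pages.
// isLoaded() is a state check that can be called whenever the widget's state
// may have changed, not just once when a page object is constructed.
public class SearchWidget {
  private final WebDriver driver;
  private final By container = By.cssSelector(".search-widget"); // hypothetical locator

  public SearchWidget(WebDriver driver) {
    this.driver = driver;
  }

  public boolean isLoaded() {
    // findElements (plural) returns an empty list instead of throwing,
    // so this check can be repeated cheaply.
    return !driver.findElements(container).isEmpty();
  }
}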

But when you have a (reasonably) static page, an initial check that the page is loaded, rather than a check on each element you can safely assume *should* be there if the page is loaded, can actually be more performant, as well as providing a clearer stack trace when things aren't as expected.

Repeatedly checking if a page is loaded before performing any action is a bad idea in any case. 

import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;

public class MyPage {
  private final WebDriver driver;
  private final String url = "https://example.com/my-page"; // placeholder URL for the page under test
  private final By myElement = By.id("main-content");       // placeholder locator that proves the page loaded

  public MyPage(WebDriver driver) { this.driver = driver; }

  public MyPage open() {
    driver.get(this.url);
    isLoaded(); // throws if the page did not load
    return this;
  }

  public boolean isLoaded() {
    try {
      driver.findElement(myElement);
      return true;
    } catch (NoSuchElementException e) {
      throw new MyException("page isn't loaded", e); // custom exception wrapping the real cause
    }
  }
}