Here is my response to the question posted on Quora: “What is performance testing?”
There are a lot of different activities frequently rolled into the term “performance testing.”
Generally, performance testing means testing the performance of your system – to give a tautological definition, which illustrates the ambiguity of the term more than it illuminates the process.
Some other terms often associated with performance testing are “load testing”, “stress testing”, and “scalability testing.”
Like all testing, performance testing has two completely different goals: verification and discovery.
Measuring performance is the verification step, but there are many different things that can be measured, including (but not limited to) response time, concurrency, latency, CPU load, memory usage, and disk usage.
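To make the measurement side concrete, here is a minimal sketch of timing response latency across a batch of requests and summarizing the percentiles. The `handle_request` function is a hypothetical stand-in for a call into the system under test; a real harness would hit an actual endpoint.

```python
import time
import statistics

def handle_request():
    # Hypothetical stand-in for the system under test;
    # replace with a real call (e.g., an HTTP request).
    time.sleep(0.001)

# Time each request individually to collect latency samples.
samples = []
for _ in range(50):
    start = time.perf_counter()
    handle_request()
    samples.append(time.perf_counter() - start)

# Percentiles matter more than averages: a fast median can
# hide a slow tail that some fraction of users experience.
p50 = statistics.median(samples)
p95 = sorted(samples)[int(len(samples) * 0.95)]
print(f"median: {p50 * 1000:.2f} ms, p95: {p95 * 1000:.2f} ms")
```

Reporting both the median and a tail percentile is a deliberate choice: averages smooth over exactly the outliers that performance testing exists to find.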
Different loads can affect the application in different ways, so it’s usually not enough to just throw a bunch of requests at your system. The goal is to understand how your system performs under these loads, how many resources it consumes, how many users it can support at the same time, and how fast it can perform each action.
Simulating loads is probably the most challenging part of performance testing, because you can’t really have hundreds or thousands of users hitting your system in test. Besides the prohibitive cost, you may not have the physical resources in test to handle a realistic production load. It is also difficult to anticipate (and hence simulate) all real-world scenarios: things like network latency, the variety of clients, and unanticipated (or malicious) user actions are very hard to cover.
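The basic mechanics of load simulation can be sketched with a thread pool that fires concurrent requests at the system and measures overall throughput. This is a minimal illustration, not a substitute for a real load-testing tool; `handle_request` is again a hypothetical stand-in for the system under test, and real user behavior (think time, varied actions, network latency) is not modeled here.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Hypothetical stand-in for one user action against
    # the system under test.
    time.sleep(0.005)
    return i

# Simulate 20 concurrent "users" issuing 100 requests total,
# and time the whole run to compute aggregate throughput.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(handle_request, range(100)))
elapsed = time.perf_counter() - start

throughput = len(results) / elapsed
print(f"{len(results)} requests in {elapsed:.2f}s ({throughput:.0f} req/s)")
```

Varying `max_workers` while watching throughput and error rates is the simplest way to see how concurrency, not just request volume, affects the system.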
I can think of one instance where a company went to great expense to duplicate their production hardware and network in a test environment for performance testing, only to be foiled in their measurements by an unanticipated variable: content delivery networks like Akamai.
The other side of the performance-testing coin, opposite measurement, is exploration. In this aspect, you’re trying to find the limits of your system, where it breaks. The same techniques and instrumentation can be reused from the measurement process, but here you’re not trying to see how the system works under load; you’re trying to break it, discover how it breaks, and learn what the repercussions are.
Unusual things can happen, for instance, when you run out of memory. Performance can drop from hundreds of requests per second to several seconds per request once a memory threshold is reached and the system starts swapping or throwing out-of-memory errors. These repercussions can potentially affect security, and even other systems.
Not everything in performance testing is about achieving a certain load, however. Some issues, such as memory leaks, can only be found after running under load for an extended period of time.
Even some usability issues may be uncovered in performance testing. For instance, you may implement safeguards to prevent DoS (denial-of-service) attacks, only to find that they adversely affect users behind corporate firewalls (who may share an IP or IP range) or search-engine bots that are critical for SEO.
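The shared-IP pitfall above can be demonstrated with a toy per-IP rate limiter. This is an illustrative sketch, not a real DoS safeguard (real limiters use time windows, not lifetime counts), and the class name, limit, and IP address are all made up for the example.

```python
from collections import defaultdict

class PerIpRateLimiter:
    """Naive limiter: allow at most `limit` requests per source IP."""

    def __init__(self, limit):
        self.limit = limit
        self.counts = defaultdict(int)

    def allow(self, ip):
        self.counts[ip] += 1
        return self.counts[ip] <= self.limit

limiter = PerIpRateLimiter(limit=5)

# Ten distinct users behind one corporate NAT all present the same
# source IP, so the limiter rejects half of the legitimate traffic.
nat_ip = "203.0.113.7"
decisions = [limiter.allow(nat_ip) for _ in range(10)]
print(decisions.count(False), "legitimate requests blocked")  # → 5
```

A load test that simulates many users from a handful of source addresses is one of the few ways to catch this class of problem before a real customer behind a NAT does.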
In short: performance testing is a lot of things, and determining what your goals are, for both verification and discovery, is a large part of the task. Determining how you are going to simulate load and how you are going to measure performance are the other main tasks.