Stressless Load Testing

All About Performance Testing & Tools for Web, Mobile and API

Eradicating Load Testing Errors - Part 2: Mastering Correlation

This post, written by the StresStimulus team, is a reprint of the article originally published in the Summer 2016 issue of Methods & Tools magazine.

What you will learn        
This is the second of a two-part article about load test errors. Part One explained the genesis of correlation errors and how to distinguish them from other performance testing issues. It also clarified why this type of problem is the toughest to deal with.

In this part, you will learn how to fix these issues with the Comprehensive Correlation Method. This strategy demystifies the struggle with dynamic parameters and describes a methodology for configuring them in all your tests. It teaches how to eradicate correlation problems in the load testing of web applications on any platform or framework, including those currently considered untestable due to unresolvable correlation.

Who should read this
Developers, QA specialists and performance engineers who want to master performance testing and specifically learn how to correct test configuration errors.

What to expect
If you are stuck with a test that cannot pass a run with a single virtual user, this approach will help you to find hidden dynamic values, create missing extractors and parameters and prepare your test script for error-free runs. By mastering this method, you will gain the confidence to correlate any test case that comes your way.

Winning strategy for testing: several factors make this method easier and more rewarding to learn and adopt

  • Never stops working. This method has been used on hundreds of applications and dozens of platforms and frameworks, and it never failed to correlate a test scenario. Even when we load tested hard-to-crack applications such as Oracle Hyperion or Microsoft Dynamics NAV, it always worked.
  • Complete and Step-by-step. This material contains everything you need to know to start learning and using this strategy today. It describes a complete sequence of steps to work your way through fixing correlation problems in your performance tests one by one.
  • Application, framework and platform independent. The problem-resolution process and the algorithm behind it do not depend on any particular framework or application-specific design. It is based on general principles of client/server systems, HTTP, and web browsers, described in Part One, and therefore it works in all environments.
  • Tool-independent. It can be carried out with any performance testing tool of decent quality. This doesn't mean that the effectiveness of the methodology is the same with all software products: some provide greater simplicity, productivity, and convenience than others. But the methodology itself is a sequence of well-defined data manipulation steps that do not rely on any proprietary "know-how" in a particular load testing tool.

Let's dive into it.

TAFT approach for a tough problem. At the heart of the Comprehensive Correlation Method is a known strategy called Test, Analyze, Fix and Test (TAFT). It is a systematic, iterative process that incrementally improves a solution or a product, also known as the reliability growth approach used in manufacturing and other areas.

A recorded test script can initially have any number of correlation issues that can often overwhelm you, and you may wonder where to start. The TAFT approach keeps you focused on one task at a time. On every iteration of the process, we will concentrate on one error and create a correlation rule that will fix it. The process continues until all missing rules are created and all errors are cleared.
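The TAFT loop can be sketched as a small Python driver. This is an illustration of the iterative process only, not any tool's actual API; the `replay_and_verify` and `fix_first_error` callables are hypothetical stand-ins for your load testing tool's verify and rule-creation features.

```python
def taft_loop(replay_and_verify, fix_first_error, max_iterations=100):
    """Test -> Analyze + Fix -> Test, resolving one correlation error per iteration.

    replay_and_verify() returns the list of outstanding correlation errors
    from a single-virtual-user replay; fix_first_error(error) creates the
    correlation rule for the earliest error. Returns the number of fixes applied.
    """
    for iteration in range(1, max_iterations + 1):
        errors = replay_and_verify()          # Test: replay with one virtual user
        if not errors:
            return iteration - 1              # done: the script is error-free
        fix_first_error(errors[0])            # Analyze + Fix the earliest error only
    raise RuntimeError("not converging; re-check the chosen dynamic values")

# Toy demo: three outstanding errors, each "fix" clears exactly one.
pending = ["login token", "view state", "report id"]
fixes = taft_loop(lambda: list(pending), lambda err: pending.remove(err))
print(fixes)  # 3
```

The driver mirrors the key discipline of the approach: it never tries to fix more than the first error per iteration.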

The Comprehensive Correlation Method at-a-glance

Let's define five key entities that are important to understanding the correlation process:

  1. Dynamic value (variable string) is a value in one or several requests that is related to a corresponding value generated on the server and sent in a previous response. On a subsequent test script replay, the server, engaged in an interaction with a virtual user, creates a new value. It then expects to receive the new value in the requests.
  2. Correlation error is an application error caused by the server receiving a request with a stale recorded value instead of a new one. As a result, the server responds with either an explicit error message or some other irregular content not found in normal responses.
  3. Extractor is a search rule that defines how to locate the dynamic value in the server response.
  4. Parameter is a find-and-replace rule that defines how to find the recorded dynamic value in the request and replace it with the value returned by an extractor.
  5. Correlation rule is a combination of an extractor and one or several parameters that work together to inject proper strings in requests to resolve correlation.

For more background on these five entities and to better understand the underlying processes in web-based client/server systems, check Part One.

Let's formulate the correlation challenge in terms of these five entities: because a dynamic value is handled by the test script as a static value, the application integrity on the server breaks, which leads to the server throwing a correlation error. For example, a transaction that was completed during recording was not completed on replay, or a report was not generated as expected. To fix this, you need to create a correlation rule, which consists of an extractor and a parameter.
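The relationship between these entities can be sketched as simple data structures. This is a minimal, hypothetical model for illustration (a text-delimited extractor and a plain string-replacement parameter), not the internals of any particular tool.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Extractor:
    """Search rule that locates the dynamic value in a server response."""
    left_boundary: str            # text immediately before the dynamic value
    right_boundary: str           # text immediately after it

    def extract(self, response_body: str) -> str:
        pattern = re.escape(self.left_boundary) + r"(.*?)" + re.escape(self.right_boundary)
        match = re.search(pattern, response_body)
        if match is None:
            raise ValueError("dynamic value not found in response")
        return match.group(1)

@dataclass
class Parameter:
    """Find-and-replace rule: swap the stale recorded value for a fresh one."""
    recorded_value: str

    def apply(self, request: str, fresh_value: str) -> str:
        return request.replace(self.recorded_value, fresh_value)

@dataclass
class CorrelationRule:
    """One extractor plus one or more parameters that consume its value."""
    extractor: Extractor
    parameters: list = field(default_factory=list)

# Demo: the replay-time response carries a new token; the rule injects it.
rule = CorrelationRule(Extractor('token="', '"'), [Parameter("OLD123")])
fresh = rule.extractor.extract('<input token="NEW456">')
print(rule.parameters[0].apply("POST /save?token=OLD123", fresh))
# POST /save?token=NEW456
```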

Now, let's sort out the correlation solution in the context of the TAFT approach:

Fig. 1. TAFT approach in fixing load test correlation errors

Test means replaying your script with one virtual user to discover correlation-related errors. In load testing tools, this operation is typically called verify, check or debug test. All testing software should have its way of finding correlation errors. If the software you're using cannot adequately detect them, make sure that it at least exposes the content of the requests and responses. Use the hints provided in the section Signs of potential correlation errors in Part One of this article to find such issues manually. Once you have the list of the issues detected in your script, focus on the first occurrence in the earliest response, and ignore all others for the time being.

Analyze means tracking down a dynamic value causing the correlation error at hand. This step is described in the next section Searching for a dynamic value.

Fix means creating an extractor and one or several corresponding parameters. These steps are outlined in the sections Creating an Extractor and Creating Parameter(s) below.

Test means verifying the script again. The error that you were focusing on this iteration of the process should be resolved. Often, a single correlation rule will fix several errors. For example, fixing a token in an authentication request will fix all subsequent login failures.

Continue advancing through these steps until verification returns no further issues. After that, give yourself some kudos - your test is ready to run.

If at least one error was not fixed on any particular iteration, it means one of the following: the dynamic value was incorrectly chosen, the extractor or parameter was created incorrectly, or more correlation rules are necessary to fix the issue.

Searching for dynamic values

Once you establish the first response with a correlation error, search for a variable string in the matching request, as described in the next section, Finding dynamic values in the request. If you find it, proceed with creating an extractor and a parameter. If not, check the previous requests: loop backwards until you locate the session with the relevant variable string. Skip static requests for images and style sheets, which are typically not relevant to the application logic; sessions with content type text/* or application/* are likely to be relevant. On each iteration of the loop, follow the steps described in the next section.
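The backward search can be sketched as follows. The list-of-dicts trace format is a hypothetical stand-in for whatever your tool exposes; the point is the loop direction and the content-type filter.

```python
def find_source_response(sessions, error_index, suspect_value):
    """Walk backwards from the failing request, skipping static assets,
    until a previous response containing the suspect value is found.

    `sessions` is a recorded list of dicts with 'content_type' and
    'response_body' keys (an illustrative trace format, not a tool's API).
    Returns the index of the source session, or None if the value is not
    server-generated.
    """
    relevant_prefixes = ("text/", "application/")      # likely application logic
    for i in range(error_index - 1, -1, -1):
        session = sessions[i]
        if not session["content_type"].startswith(relevant_prefixes):
            continue                                   # skip images, style sheets, etc.
        if suspect_value in session["response_body"]:
            return i
    return None

trace = [
    {"content_type": "text/html", "response_body": 'token="NEW456"'},
    {"content_type": "image/png", "response_body": "..."},
    {"content_type": "application/json", "response_body": "{}"},
]
print(find_source_response(trace, 3, "NEW456"))  # 0
```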

Finding dynamic values in the request

There are two tactics for finding a dynamic value in a request: the single-recording technique and the double-recording technique. Select either of them based on their pros and cons (see below), your particular situation, and your preferences.

Single-recording technique. This is the simpler of the two techniques. It consists of:
  • Recording a test scenario in the browser, and creating a recorded sequence of HTTP sessions;
  • Replaying the recorded script with your tool and creating a replayed sequence of sessions;
  • Comparing the two sequences using a 4-step algorithm to identify variable strings, described next.

Fig. 2. Finding a Dynamic Value: Single-Recording Technique

The 4-step algorithm

Note: Step 1 in the single-recording technique involves some guessing and sometimes you can fail to locate the variable string. In this case, use the double-recording technique that is based on a deterministic algorithm.

Step 1. Identify a suspect. Visually inspect the recorded request and try to find a string that may be a dynamic value. It should look like an identifier of some sort. Pay attention to long numbers, alphanumeric strings, GUIDs, or values immediately following keywords such as session, id, token, or state. It may also look like a long unreadable string that possibly encodes an application state or the hash of application data. An example is ViewState in ASP.NET applications. Longer variable strings are easier to guess, as they are more likely to stand out.

Note: In many publications about correlation, the approach in step 1 is used to find dynamic variables. However, this method by itself is not accurate. Therefore, the following three steps are used to qualify the suspect to see if it is in fact the variable string.

Step 2. Compare, must be the same.
Compare the suspect value with a similarly positioned value in the replayed request. If they are the same, then the suspect passed this qualification, and you can move to step 3. If the recorded and replayed values are different, then this suspect failed the qualification because while it is dynamic, it is already correlated. In this case, go back to step 1 to locate a different candidate in this request, or start searching for candidates in the previous requests.

Step 3. Search in responses. Try to find the suspect value in one of the recorded previous responses. If you find it, then the suspect passed this qualification too, so you can move to step 4. Otherwise, the searched value is not generated on the server, in which case look for another suspect. Keep in mind that the requests' values may be encoded, so an exact match might not be found in previous responses. Here are three examples:
  • If URL encoding is used, the value 'mary poppins' will be encoded to 'mary%20poppins'.
  • HTML encoding encodes special characters like '>' to '&gt;'.
  • Sometimes non-alphanumeric characters are encoded using their hex-encoded ASCII codes, prefixed with \x. So '#' is encoded as '\x23'.

A good rule of thumb: when searching for suspect values in responses, include only alphanumeric characters in the search, as they are less likely to be encoded. Also, if your traffic is encrypted and/or compressed, your tool must decrypt/decompress responses before you can create an extractor.
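The three encodings above can be reproduced with Python's standard library, which is handy when you need to predict what form a suspect value will take in a response:

```python
import urllib.parse
import html

# URL encoding: a space becomes %20
print(urllib.parse.quote("mary poppins"))   # mary%20poppins

# HTML encoding: special characters become entities
print(html.escape(">"))                     # &gt;

# Hex-escaped ASCII, as sometimes seen in JavaScript string literals
print("\\x%02x" % ord("#"))                 # \x23
```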

Step 4. Compare, must be different. Compare the suspect value in the recorded response with a similarly positioned value in the replayed response. If they are different, it is a dynamic value that you need to correlate. Congrats! Move on to creating an extractor. Otherwise, the suspect is not a dynamic value so look for another candidate.

Double-recording technique. As its name suggests, it involves recording the same test scenario twice to create two recorded sequences of HTTP requests. Now, in step 1, instead of guessing dynamic values, you can compare two recorded requests and designate all strings that differ as suspects. This way you will never miss a dynamic value, even if it doesn't fit the typical variable string criteria, such as being only a few characters long. The remaining three steps of the process are the same.

Fig. 3. Finding a Dynamic Value: Double-Recording Technique
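Suspect-finding in the double-recording technique is mechanical enough to script. A minimal sketch, assuming the two recordings of a request tokenize into the same number of name/value pieces:

```python
import re

def find_suspects(request_a: str, request_b: str):
    """Tokenize two recordings of the same request and return the value
    pairs that differ - each differing pair is a suspect dynamic value."""
    tokens_a = re.split(r"[&=?\s]", request_a)
    tokens_b = re.split(r"[&=?\s]", request_b)
    return [(a, b) for a, b in zip(tokens_a, tokens_b) if a != b]

first  = "POST /save?sid=A1B2&user=mary&state=x9y8"
second = "POST /save?sid=C3D4&user=mary&state=q7r6"
print(find_suspects(first, second))  # [('A1B2', 'C3D4'), ('x9y8', 'q7r6')]
```

Each returned pair then goes through steps 2-4 to confirm it is an uncorrelated dynamic value.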

The two techniques for finding a dynamic value can be compared as follows:

Single-recording technique
  • Pros: Fewer steps
  • Cons: Step 1 requires guessing and can fail to find a variable string in complex applications
  • When to use: In simpler applications, when the tester has more experience
Double-recording technique
  • Pros: All steps clearly defined, no guessing is involved, works in all applications
  • Cons: More steps
  • When to use: In complex applications, when single-recording technique is not effective

Creating an Extractor

The next stage is to build a proper extractor that locates and returns the parameter value from the response. Step 3 in the previous section describes how to find a response where the extractor should be created. If you found more than one response with the dynamic value, then create the extractor on the first one. To create an extractor, you need to define a rule that will select the known recorded dynamic value from the recorded response. The following rule types are supported in various load testing tools:

  • Text-delimited - extracts text between a given starting (left boundary) and ending (right boundary) text
  • Regular expression - extracts a regular expression search pattern
  • Web forms - extracts the value of a specified field in the web form
  • Headers - extracts a value of the specified response header
  • XPath - extracts the result of an XPath query from XML
  • JPath - extracts the result of a JPath query from JSON
  • Hidden Field - extracts a hidden field in a form
  • Selected Option - extracts a dropdown option in a form
  • Tag Inner Text - extracts inner text of an HTML tag
  • Attribute Value - extracts an attribute value from an HTML tag

The actual mechanics of creating an extractor depend on your performance testing tool. In some of them, you need to write a script to create an extractor. In others, you can use the UI to select one of the supported rule types. For example, creating a JPath extractor can be as easy as selecting the dynamic value in the JSON response and clicking a button (Fig. 4).

Fig. 4. Example of Creating an Extractor
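To make two of the rule types above concrete, here is a sketch of a JPath-style and a regular-expression extractor applied to the same recorded JSON response. The response body and field names are invented for illustration; this is not any tool's extractor API.

```python
import json
import re

recorded_response = '{"report": {"id": "RPT-0042", "status": "ready"}}'

# JPath-style extractor: navigate the parsed JSON to the field
report_id = json.loads(recorded_response)["report"]["id"]

# Equivalent regular-expression extractor on the raw response text
match = re.search(r'"id":\s*"([^"]+)"', recorded_response)

print(report_id)       # RPT-0042
print(match.group(1))  # RPT-0042
```

Both rules select the same known recorded value; at replay time, the same rule will pick up whatever new value the server generates.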

Creating a Parameter

The next phase is to create one or several parameters that will use the extractor. By now you have all the necessary information to implement the parameter: the request, the dynamic value, and the extractor. Again, the actual mechanics of creating the parameter depend on your tool. In some of them, you must write a script, while other tools offer a point-and-click interface. It can be as easy as shown in Fig. 5, where you just highlight the parameter, right-click, and select an extractor from a list to replace it.

Fig. 5. Parameterization Example

Often, a dynamic value is used several times across requests. Always check if the parameter that you just created needs to be replicated by searching for all instances of the dynamic value and replacing them with the parameter. If your tool has a find-and-replace feature, this process should only take a few clicks.
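The find-and-replace propagation step can be sketched as follows. The placeholder syntax `{{token}}` is an arbitrary choice for this illustration; real tools use their own parameter markers.

```python
def propagate_parameter(requests, recorded_value, placeholder="{{token}}"):
    """Replace every occurrence of the recorded dynamic value across all
    requests with a placeholder that the extractor fills at replay time.
    Returns the rewritten requests and the number of replacements made."""
    rewritten, count = [], 0
    for request in requests:
        count += request.count(recorded_value)
        rewritten.append(request.replace(recorded_value, placeholder))
    return rewritten, count

script = ["GET /page?sid=A1B2", "POST /save", "GET /report?sid=A1B2&x=1"]
new_script, hits = propagate_parameter(script, "A1B2")
print(hits)        # 2
print(new_script)  # ['GET /page?sid={{token}}', 'POST /save', 'GET /report?sid={{token}}&x=1']
```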

A real-world example of using the Comprehensive Correlation Method for load testing Microsoft Dynamics NAV is provided in this post.

About your Load Testing Tool

Remember, you never battle performance testing challenges alone because your load testing tool is your friend. It helps every step of the way to navigate the process described here.

Why you need a good tool. You do not want to correlate your test all by yourself. A good tool will boost your performance, make your process less stressful, eliminate the potential for mistakes, and reduce configuration time, in some cases even by an order of magnitude. Keep in mind the following factors when comparing performance testing tools.

Autocorrelation is important. In reality, no tool has autocorrelation that can handle all dynamic values in every web application. Some of them support certain applications better than others, and some, like JMeter, do not have autocorrelation at all. The goal is to find the product that can best correlate your test script. The better the autocorrelation engine your tool has, the less work you have to do.

How to select a load testing tool. Let's analyze what procedures in the method described above should be completed automatically. You will be best served if you equip yourself with a product that has:
  • a verification feature that will playback the recorded test case and highlight all correlation issues
  • the ability to compare two sessions' (request/response) header, query string, and body, whether it be recorded and replayed or a pair of recordings, in a way that clearly underscores all differences between compared content
  • the ability to search the entire test case for a given pattern in any part of an HTTP message and highlight all found occurrences
  • multiple extractor formats and options for accurate runtime extractions including support for WCF binary, JSON, XML, HTML encoding, and URL encoding
  • a simple UI to create a parameter with a find-and-replace feature to propagate similar parameters across the entire test case

The bottom line: not all tools are created equally. Do your research.

With this method, the right tool and some practice, you will empower yourself to successfully correlate every load test that you face.