Concurrency mode

sachpatel
Posts: 8
10 days ago
Hi,

This question is with regard to replaying scripts and customising the "Concurrency mode" option. I understand there are a couple of options at the script level that can be used (Concurrency mode: Content-type, Recorded Concurrency). I tested both options using the following scenario: 3 users, 2 iterations. These are the tests I ran:

1) Concurrency mode = Content-type (at the script level)
2) Concurrency mode = Recorded Concurrency (at the script level)
3) Customised the concurrency mode to "parallel" for a couple of transactions (at a transaction level)

Looking at the 95th percentile, the response times differed from what Dev Tools captures when going through the transactions manually:

1) Much slower
2) Much faster
3) More accurate

What is the best/quickest way to determine the script is replayed in an accurate manner to obtain more realistic response times when running as part of a load test? It seems very time consuming to have to go through each transaction with Dev tools open to determine which transactions are executed in parallel, sequential, etc.

Thank you.
George @StresStimulus
Administrator
Posts: 378
10 days ago
Dev Tools captures the request/response time (server + network time) plus page rendering and JavaScript processing time (client time).

Load testing only measures times that depend on the load/concurrency of shared resources (server + network time).

However, you can configure StresStimulus to emulate client time by adding recorded delays into the script as described here.

I would recommend using Recorded Concurrency mode together with the emulated client delays for the most accurate results.
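To make the distinction concrete, here is a minimal sketch (not StresStimulus code) with hypothetical timing values showing which portion of the total each tool reports:

```python
# Hypothetical timing components for one page (all values in seconds).
server_time = 0.8    # time the server spends producing responses
network_time = 0.3   # transfer time over the wire
client_time = 1.2    # browser rendering + JavaScript execution

# A load-testing tool measures only the load-dependent portion:
load_test_time = server_time + network_time

# DevTools (and the user's perception) also includes client time:
devtools_time = server_time + network_time + client_time

print(f"load test: {load_test_time:.1f}s, DevTools: {devtools_time:.1f}s")
# → load test: 1.1s, DevTools: 2.3s
```

With recorded client delays added to the script, the load-test total moves toward the DevTools number.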


- Cheers
sachpatel
Posts: 8
9 days ago
Hi George,


Thank you for your response.

I understand the differences you have highlighted. However, what I have observed when replaying a script is that the response times reported are higher than those seen when running the same steps manually and capturing the response times in DevTools. From your explanation, DevTools should report a higher response time, since it includes the client-side rendering.

The same thing is also observed when running the script during a load test. SS reports the average response time as being 60 seconds (for example). But when I manually navigate to the page whilst the test is running, it doesn't appear to take 60 seconds to load.

The above two examples were observed whilst having the Concurrency mode set to "Content-type". This is what led to my initial post: I ran a POC with different settings to observe the captured response times, and the results were inconsistent depending on factors such as re-running an older version of the script versus a newer version, and the time of day. We just want to ensure the method we use is accurate and consistent.

Can I confirm that selecting "Recorded Concurrency" will automatically determine whether the requests in a transaction are sequential or parallel? Does the Set-Cookie property also need to be set to "No"?
George @StresStimulus
Administrator
Posts: 378
9 days ago
Let me unpack it:

  • Regarding your visual observation: StresStimulus page response time and a visual impression of page responsiveness can be apples and oranges. Here is one reason: StresStimulus measures the time between the first and last request. On the other hand, when you visually determine that the page has loaded, you do not know whether any resources are still being loaded or whether the page DOM below the fold is still being generated.


  • Regarding DevTools and StresStimulus time: response time is a product of the client sending requests and the server sending back responses. To investigate your findings, compare the behavior of DevTools and StresStimulus. Execute a test with one VU and one iteration to produce the same server load as with DevTools, then compare the waterfall diagrams in StresStimulus and DevTools. The StresStimulus waterfall diagram is described here. Then execute StresStimulus tests with various concurrency settings and observe the differences.

  • Regarding StresStimulus concurrency modes: You are correct that Recorded Concurrency determines whether requests are sequential or parallel based on the recorded waterfall. A request is parallel if its response comes after the next request is sent; otherwise, it is sequential.
    The property by which a Set-Cookie header enforces sequential requests is described here. You can set it to No if many responses carry a Set-Cookie header that forces sequential requests and you wish to send them in parallel; however, make sure setting this property doesn’t break your script.
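The parallel-vs-sequential rule can be sketched in code. This is only an illustration of the stated rule using hypothetical request timings, not StresStimulus internals:

```python
# Classify recorded requests as parallel or sequential.
# Rule: a request is "parallel" if its response completes AFTER the
# next request has already been sent; otherwise it is "sequential".
from dataclasses import dataclass

@dataclass
class Request:
    url: str
    sent: float      # time the request was sent (seconds)
    received: float  # time its response completed (seconds)

def classify(waterfall):
    """Return {url: 'parallel' | 'sequential'} for each request except
    the last, which has no successor to compare against."""
    result = {}
    for cur, nxt in zip(waterfall, waterfall[1:]):
        result[cur.url] = "parallel" if cur.received > nxt.sent else "sequential"
    return result

# Hypothetical recorded waterfall:
waterfall = [
    Request("/page",   0.00, 0.30),  # response (0.30) after next sent (0.10) -> parallel
    Request("/app.js", 0.10, 0.25),  # response (0.25) after next sent (0.12) -> parallel
    Request("/data",   0.12, 0.40),  # response (0.40) before next sent (0.45) -> sequential
    Request("/done",   0.45, 0.60),
]
print(classify(waterfall))
# → {'/page': 'parallel', '/app.js': 'parallel', '/data': 'sequential'}
```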


- Cheers
sachpatel
Posts: 8
8 days ago
Hi George,

Thank you for the detailed explanation. It all makes sense.

One more observation: is there a best practice for recording a script to capture accurate concurrency for the application? When I do different recordings of the same steps, the waterfall diagrams are not necessarily the same. I observe the same within DevTools. Does SS have a way of dynamically handling requests being processed differently with each iteration?

We recently faced an issue where scripts were ported from another test environment and a different version of CRM. In that scenario, would it be worth checking the waterfall diagrams before re-scripting? The scripts work fine in the new environment after using the "Remap Hosts" option.
George @StresStimulus
Administrator
Posts: 378
7 days ago
sachpatel wrote:
Is there a best practice in terms of recording a script to capture accurate concurrency for the application?


The StresStimulus recorder is designed to automatically record the correct concurrency.


sachpatel wrote:
Because when I do different recordings of the same steps, the waterfall diagrams are not necessarily the same.

You are correct: two recorded waterfalls won’t be exactly the same due to multiple random factors, so the response times will not be exactly the same either. However, the waterfall should have sufficient information for a rough comparison of the browser and StresStimulus concurrencies.


sachpatel wrote:
Does SS have a way of dynamically handling requests being processed differently with each iteration?

Different in what way? Can you please clarify the question? StresStimulus handles requests according to the recorded script, your script settings, and the real-time responsiveness of your server.


sachpatel wrote:
We recently faced an issue where scripts were ported from another test environment and a different version of CRM. In that scenario, would it be worth checking the waterfall diagrams before re-scripting?

The main reason for checking the waterfall diagrams is to double-check that your script settings are correct. Once your script is configured, you can run your test multiple times without changing settings or checking the waterfall diagram.


- Cheers
sachpatel
Posts: 8
7 days ago
sachpatel wrote:
Does SS have a way of dynamically handling requests being processed differently with each iteration?

George @StresStimulus wrote:
Different in what way? Can you please clarify the question?

I was referring to how the waterfall can differ with each recording.


I have sent an email to the support team after running several tests with different settings, as recommended, and provided the outcome. If you could please refer to that and provide any further suggestions of options to try, that would be much appreciated. Thank you.
George @StresStimulus
Administrator
Posts: 378
6 days ago
Since you contacted support, they will reply to you directly.


- Cheers






Copyright © 2021 Stimulus Technology