
Running Load Tests on Microsoft Azure with IP Switching feature

One year ago I started to investigate how we could use the Load Test feature available in Visual Studio 2013 Ultimate and Microsoft Azure on our own project. The main idea was to develop a mechanism to test complex scenarios (especially ones written in C#) that can simulate 10k, 20k and even 100k users.
The idea was accepted by the client and we ended up with a great and interesting application that can overload a system worth millions. This is great for both sides: mission complete for the team that defined the load tests, and at the same time great for the development team and especially for the support team, because they now know the limits of the system.
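For context, a load test scenario in this setup is just a regular C# test method that Visual Studio distributes across thousands of simulated users. The sketch below is a minimal, simplified example (the endpoint URL is made up); our real scenarios chained together much more complex business calls.

using System.Net.Http;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderScenarioTests
{
    private static readonly HttpClient Client = new HttpClient();

    [TestMethod]
    public void PlaceOrder_Scenario()
    {
        // Hypothetical endpoint; the real scenario was far more complex
        // and chained several business calls together.
        var response = Client.GetAsync("https://myapp.example.com/api/orders").Result;

        Assert.IsTrue(response.IsSuccessStatusCode,
            "The system should still respond correctly under load.");
    }
}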
Now we are stuck with another problem. Because we are using the Microsoft Azure infrastructure to run the load tests (having our own hardware would be too expensive), we cannot simulate calls coming from multiple IP addresses.
This is a current limitation of Visual Studio Online - the IP Switching feature is not yet available. What does this mean? Basically, you cannot run or configure a load test from Visual Studio Online to use multiple IPs during the test run.
At the beginning this feature was not very important for us, but now we realize that we would like to have it.
What could we do?
Based on the feedback that we received, we have a pretty good option: we can configure Test Controllers and Test Agents for this purpose. We need to create VMs on Microsoft Azure that will play the role of Test Agents. These machines will be used to run the load tests. Because each machine has a different IP, it is pretty simple to distribute the tests across these machines.
Next, I would like to go a little deeper and discuss this solution.
Let’s define the two terms that appeared in the above paragraph. A Test Controller is the one that orchestrates the whole load test: it sends the tests to each node, waits for the tests to run, collects the results and so on. A Test Agent is a node where the load tests actually run - Test Agents are the machines used to execute the tests.
The relationship between a Test Controller and Test Agents is 1 to N. This means that in a configuration we can have one Test Controller that orchestrates the load tests, but multiple Test Agents that are used to run them.
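As a quick sanity check that the tests really run on different machines (and therefore from different IPs), we can log the address of the node each test executes on. This is only a diagnostic sketch that could sit next to the real scenarios, nothing more:

using System.Linq;
using System.Net;
using System.Net.Sockets;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AgentDiagnosticsTests
{
    // MSTest injects the test context automatically.
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void LogAgentAddress()
    {
        // Resolve the IPv4 addresses of the machine the test is running on.
        // When the load test is distributed, each Test Agent reports its own address here.
        var addresses = Dns.GetHostEntry(Dns.GetHostName())
                           .AddressList
                           .Where(a => a.AddressFamily == AddressFamily.InterNetwork);

        TestContext.WriteLine("Running on: {0}", string.Join(", ", addresses));
    }
}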

It is important to know that one license of Visual Studio 2013 Ultimate is enough for all of this. You don’t need a separate license for each Test Agent.
We can have different configurations. For example, we can use a single machine that plays both the Test Controller and Test Agent roles. Another option is to have the Test Controller and Test Agents installed on different machines, with Visual Studio installed only on the machine used by testers/developers to trigger the load test.
If you try to do something like this on Microsoft Azure, don’t forget to open port 6901. This is the default port used by the nodes for incoming calls. Also keep in mind that all communication between the machine where Visual Studio is installed (the machine from which the load test is triggered) and the Test Agents goes through the Test Controller.
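Before running anything, it is worth verifying from the Visual Studio machine that the port is actually reachable. A small sketch like the one below can be used for that check (the host name is made up - use the DNS name or IP of your own Test Controller VM):

using System;
using System.Net.Sockets;

class ControllerPortCheck
{
    static void Main()
    {
        // Hypothetical host name of the Test Controller VM on Azure;
        // replace with the cloud service DNS name or IP of your own machine.
        const string controllerHost = "mytestcontroller.cloudapp.net";
        const int controllerPort = 6901;

        try
        {
            using (var client = new TcpClient())
            {
                client.Connect(controllerHost, controllerPort);
                Console.WriteLine("Port {0} is reachable on {1}.", controllerPort, controllerHost);
            }
        }
        catch (SocketException ex)
        {
            Console.WriteLine("Cannot reach {0}:{1} - check the Azure endpoint/firewall. ({2})",
                controllerHost, controllerPort, ex.Message);
        }
    }
}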
Based on this information, we can create VMs on Microsoft Azure that play the roles of Test Controller and Test Agents. You only need to create and configure them once and you can reuse them every time you need them.
Useful links:

