windows-iis-net-framework

Considerations when using IIS and .NET Framework

Engineering

11/27/2023 1:30 PM

.NET Microsoft Tutorial Development

Seeing errors? Try implementing these solutions

Recently, we had a customer file a support ticket regarding some errors they saw with IIS (Microsoft’s Internet Information Services) and .NET Framework.

If you have a similar environment, this blog contains some tips to help you smoothly run the 51Degrees service.

Setting the scene

In this case, the customer was running Windows Server 2012 R2 with IIS 8.5. The deployed application was an ASP.NET MVC app targeting .NET Framework 4.8.

Additionally, we built the 51Degrees Pipeline from the configuration file and integrated it as an IIS module. It runs before request processing reaches the app code and processes evidence from the request. It also gets device data from the cloud service to enrich the request. For more information, see our Cloud Framework-Web example.

With this extra context, let's describe some of the errors you may encounter and what you can do to solve them. Alternatively, if you’d like a little extra help implementing 51Degrees within your environment, you can purchase a priority support plan.

DLL and Global Assembly Cache troubles

The customer was moving from Version 3 of the Pipeline API to Version 4 when they encountered an error: the 51Degrees Pipeline was failing to build from the configuration file.

First, we confirmed that the configuration file was in the appropriate format and that the customer had downloaded the latest NuGet packages. However, the stack trace still showed a call from the V3 assembly.

It was apparent that a DLL from V3 was still present in the Global Assembly Cache (GAC). We advised the customer to remove that DLL and make sure that only V4 DLLs were present.
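If you want to check for this kind of conflict yourself, the gacutil tool from the Windows SDK can list and remove registered assemblies. The assembly name below is illustrative of the V3-era API; use the name reported in your own stack trace:

```shell
:: List any 51Degrees assemblies currently registered in the GAC
gacutil /l | findstr /i "FiftyOne"

:: Remove a stale V3 assembly by name (illustrative; use the name
:: shown by the listing above)
gacutil /u FiftyOne.Foundation
```

Run these from a Developer Command Prompt with administrative rights, then restart the app pool so IIS reloads the assemblies.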

Requests are blocked

Looking into your stack trace, you may see an error indicating that the Pipeline failed to initialize.

On initialization, the CloudRequestEngine makes two requests to these Cloud Service APIs:

  • /api/v4/AccessibleProperties gets the accessible properties for a given Resource Key. You can also provide License Keys.
  • /api/v4/EvidenceKeys/{resource} gets the evidence keys in the cloud pipeline's evidence key filter. If you provide a Resource Key, the service only returns the evidence keys related to properties in that Key.
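As a sketch, the two calls might be constructed like this. The endpoint paths match the list above, but the query-string parameter names and the placeholder Resource Key are illustrative assumptions, not the documented cloud API contract:

```python
# Sketch of the two initialization requests the CloudRequestEngine makes.
# Endpoint paths are from the list above; parameter names are assumptions.
BASE_URL = "https://cloud.51degrees.com/api/v4"

def accessible_properties_url(resource_key, license_keys=None):
    """Build the AccessibleProperties URL for a given Resource Key."""
    url = f"{BASE_URL}/AccessibleProperties?Resource={resource_key}"
    if license_keys:
        url += f"&License={license_keys}"
    return url

def evidence_keys_url(resource_key):
    """Build the EvidenceKeys URL, scoped to the given Resource Key."""
    return f"{BASE_URL}/EvidenceKeys/{resource_key}"

print(accessible_properties_url("YOUR_RESOURCE_KEY"))
```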

We found that when either of these requests fails, the Pipeline fails to initialize.

As with anything hosted online, no resource has 100% uptime, and on rare occasions the 51Degrees cloud service can become unavailable. To help with this, we changed the design of the CloudRequestEngine: EvidenceKeys and AccessibleProperties are now lazily initialized and are only requested from the server at the point of first use.

This allows the Pipeline to initialize regardless; any exceptions are instead raised during evidence processing, where their handling is controlled by the SuppressProcessExceptions configuration option.
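The change can be illustrated with a generic lazy-initialization sketch (this is not the actual CloudRequestEngine code, just the pattern it now follows):

```python
class LazyCloudData:
    """Defers a remote fetch until first use, so a temporary outage
    no longer prevents the object from being constructed."""

    def __init__(self, fetch):
        self._fetch = fetch      # callable performing the HTTP request
        self._value = None
        self._loaded = False

    @property
    def value(self):
        if not self._loaded:
            # A network failure now surfaces here, during evidence
            # processing, instead of at construction time.
            self._value = self._fetch()
            self._loaded = True
        return self._value

calls = []
data = LazyCloudData(lambda: calls.append(1) or {"keys": ["user-agent"]})
assert calls == []            # constructing it made no request
print(data.value)             # first access triggers the fetch
```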

IIS application pool stopped under load

There are a few reasons why IIS could stop the app pool, among them misconfiguration or a high number of errors within a given time frame.
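The error-count case is IIS's Rapid-Fail Protection, which stops a pool that fails too many times within a configured interval. If you suspect it, you can inspect and relax the thresholds with appcmd; the pool name and values below are illustrative:

```shell
:: Show the current settings for a pool, including the failure section
%windir%\system32\inetsrv\appcmd list apppool "MyAppPool" /text:*

:: Allow more failures per interval before IIS stops the pool
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /failure.rapidFailProtectionMaxCrashes:20
```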

We set up our own load testing environment, replicating Windows Server 2012 R2 with IIS 8.5 on a Parallels virtual machine. We deployed the Cloud Framework-Web example and patched the HttpClient creation so that requests are sent through a local Charles proxy:

    
        // Route all outgoing requests through the local Charles proxy
        var proxy = new WebProxy
        {
            Address = new Uri("http://127.0.0.1:8888"),
            BypassProxyOnLocal = false,
            UseDefaultCredentials = false,
        };
        
        // Create a client handler that uses the proxy 
        var httpClientHandler = new HttpClientHandler 
        { 
            Proxy = proxy, 
        }; 
        
        // Disable SSL verification so the proxy's self-signed certificate is accepted
        httpClientHandler.ServerCertificateCustomValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator; 
        
        // Finally, create the HTTP client object 
        var client = new HttpClient(handler: httpClientHandler, disposeHandler: true);
    

This lets us confirm that the requests to the server succeed and measure their timing.

We used locust.io as a load testing tool and tested two endpoints. Here is a simple locustfile.py:

    
        from locust import HttpUser, task, between 
        class FrameworkWeb(HttpUser): 
            # wait_time = between(1, 5) 
        
            @task
            def root(self): 
                response = self.client.get( 
                    "/Framework-Web", 
                    headers={ 
                        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36", 
                        "Cookie": "51D_ProfileIds=; 51D_ScreenPixelsHeight=1120; 51D_ScreenPixelsWidth=1792; 51D_PixelRatio=2", 
                    }, 
                ) 
        
                self.check_response(response) 
        
            @task 
            def json(self): 
                response = self.client.post( 
                    "/51dpipeline/json", 
                    { 
                        "51D_ProfileIds": "51D_ScreenPixelsHeight: 1120", 
                        "51D_ScreenPixelsWidth": 1792, 
                        "51D_PixelRatio": 2, 
                        "session-id": "1edf79e6-95a1-4c9f-a575-5cee6a6f7343", 
                        "sequence": 1, 
                    }, 
                ) 
        
                self.check_response(response) 
        
            def check_response(self, response): 
                if response.status_code != 200 and response.status_code != 301: 
                    print(response.status_code) 
                    print(response.text) 
    

In general, the response time increased with the number of concurrent users. The number of app pool worker processes, and the threads within each process, can limit the requests per second.

On the virtual machine where we ran this, we observed that with one process we were able to achieve around 32 requests per second. The average response time (for a full IIS response that includes a call to cloud.51degrees.com and processing of its result) was around 2 seconds.
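Those two figures fit together under Little's law (average requests in flight = throughput × latency), a rough sanity check you can apply to your own measurements:

```python
# Little's law: L = throughput x latency (average requests in flight)
throughput_rps = 32    # observed requests per second with one process
latency_s = 2.0        # observed average response time in seconds

in_flight = throughput_rps * latency_s
print(f"~{in_flight:.0f} requests in flight on average")  # ~64
```

If your offered load pushes the in-flight count well past what the worker processes can service, the excess lands in the request queue described next.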

The system adds requests that are not serviced immediately to the request queue, which has a default size of 1000. Once the queue is full, the server starts responding with 503 Service Unavailable and stops handling additional requests.
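That behaviour can be modelled as a simple bounded queue that rejects once full (a simplified sketch, not IIS internals):

```python
from collections import deque

class RequestQueue:
    """Simplified model of the IIS application-pool request queue."""

    def __init__(self, max_length=1000):   # IIS default queue length
        self.max_length = max_length
        self.pending = deque()

    def enqueue(self, request):
        if len(self.pending) >= self.max_length:
            return 503      # Service Unavailable: the queue is full
        self.pending.append(request)
        return 200          # accepted for processing

q = RequestQueue(max_length=2)
print([q.enqueue(i) for i in range(3)])    # [200, 200, 503]
```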

So, there are two main levers to control IIS concurrency: the number of worker processes in the application pool and the size of the request queue. Both settings are available in the Advanced Settings of the application pool.

iis-app-pool
You can increase Queue Length up to 65535, and also increase Maximum Worker Processes.
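If you prefer the command line, the same two settings can be changed with appcmd; the pool name and values here are illustrative:

```shell
:: Raise the request queue length for the pool
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /queueLength:65535

:: Allow more worker processes (a "web garden")
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.maxProcesses:4
```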

Beyond a certain limit, the response time becomes CPU-bound, so adding more processes does not help. Requests either remain queued or appear to be processed in parallel, but either way they must wait for CPU time, since they share the same CPU cores.

If in doubt, check your environment

If you’ve implemented the above and are still receiving error messages, check that your requests are not being blocked by something within your environment. It could be a firewall that doesn’t let requests to cloud.51degrees.com through, an error elsewhere in your code, or a missing cipher suite needed to negotiate the TLS handshake with cloud.51degrees.com.
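To see which cipher suites your client environment can offer during a TLS handshake, Python's ssl module can list them. This is a purely local check and does not contact cloud.51degrees.com:

```python
import ssl

# Build a default client-side TLS context and list the cipher suites
# it can offer during a handshake.
ctx = ssl.create_default_context()
ciphers = [c["name"] for c in ctx.get_ciphers()]

print(f"{len(ciphers)} cipher suites available locally")
```

If the list is short or missing modern suites, the OS-level TLS configuration (Schannel on Windows) is worth reviewing.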

If you need help identifying errors in your code, you can raise the query on the relevant 51Degrees GitHub repository. Or, if you require priority support and systems integration help, you can purchase a support plan and our team will take a look at your code.