Microsoft Workloads on AWS

Deciding where to host .NET applications on AWS

With Amazon Web Services (AWS), you have three main types of compute to choose from: instances, containers, and function-as-a-service. While all of them can be used to host .NET applications, choosing the right type helps you achieve the best possible architecture for your application. In this blog post, I review a few common types of .NET applications, then guide you toward the compute services that are optimal for each.

Compute services for .NET applications

.NET applications can run on any of the compute services on AWS. Here are some of the more common services used for running .NET applications on AWS:

  • Instances are virtual servers whose capacity you can resize with a few clicks or an API call. AWS offers Amazon Elastic Compute Cloud (Amazon EC2) instances, which come in different families and sizes.
  • Containers are a method of operating system virtualization that permits you to run an application and its dependencies in resource-isolated processes. Containers can be used if you need to control the installation, configuration, and management of your compute environment. AWS provides two container orchestration platforms: Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). AWS Fargate is the serverless compute engine that you can use to run containers without needing to manage the underlying servers.
  • Functions abstract the runtime environment from the code you want to run. AWS Lambda lets you run code without provisioning Amazon EC2 instances or containers.
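To give a feel for the function model, here is a minimal sketch of a C# Lambda handler. It assumes the Amazon.Lambda.Core and Amazon.Lambda.Serialization.SystemTextJson NuGet packages; the namespace and transformation logic are illustrative placeholders.

```csharp
using Amazon.Lambda.Core;

// Tells Lambda how to deserialize the incoming JSON event into the handler's input type.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace HelloLambda;

public class Function
{
    // Lambda invokes this method once per event; there is no server to provision or patch.
    public string FunctionHandler(string input, ILambdaContext context)
    {
        context.Logger.LogLine($"Processing: {input}");
        return input.ToUpperInvariant();
    }
}
```

The handler is plain C#; the serializer attribute and `ILambdaContext` are the only Lambda-specific pieces.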

Web applications and APIs

In broad terms, web applications and APIs respond to data view and mutation requests via the HTTP protocol. Some of their desired characteristics include:

  • Fast response time: End users expect quick responses, and even slight delays can significantly affect their experience and conversion rates, as explained in a Nielsen Norman Group research article.
  • Unpredictable load: It can be challenging to accurately predict request volume. During quiet times, you may want to scale in your compute resources to be more cost-efficient. You also need to be able to automatically scale out resources in response to increased demand.
  • High availability: Web and mobile applications and their backend APIs are the main digital storefronts for many companies. Even brief downtime of customer-facing or internal web applications or web APIs can negatively affect business.

Traditional on-premises deployments often include hosting web applications and APIs on virtual machines running Internet Information Services (IIS) on Windows Server. Using Amazon EC2 Windows instances can mimic a traditional on-premises setup. However, if you’re developing a new application or want more cloud computing benefits, keep reading.

Web applications typically consist of a mixture of both static and dynamic content. Static content includes assets such as HTML, CSS, images, and videos that do not change frequently. Dynamic content is personalized for each viewer of the web application and requires server-side processing, such as retrieval of data from a database. You need very little computational power to serve static content, so it is more optimized and cost-effective to host static content in storage services such as Amazon Simple Storage Service (Amazon S3). You can also set up Amazon CloudFront as the entry point for your users anywhere in the world to access the content with low latency. The architecture shown in Figure 1 is what I am going to use as a baseline to discuss your compute options.

Figure 1: Amazon CloudFront with two origins serving both static and dynamic content

A serverless compute service, such as AWS Lambda, lets you meet the desired application behaviors mentioned previously without the overhead of managing the underlying infrastructure. Serverless compute comes with automatic scaling, built-in high availability, increased agility, and a pay-per-use billing model. Figure 2 shows an architecture with an ASP.NET web application or web API deployed into AWS Lambda:

Figure 2: Serverless web application architecture on AWS

This architecture requires .NET 6.0 or later. For new projects, it can be a good fit. However, an existing legacy application's codebase may first need to be ported to a newer version of .NET. For this, you can use tools such as the AWS Toolkit for .NET Refactoring or the Porting Assistant for .NET, which help convert C# or VB.NET apps from .NET Framework 3.5 or later to modern .NET.
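Once the application targets modern .NET, only a small change is needed to run an ASP.NET Core app on Lambda. As a sketch, assuming the Amazon.Lambda.AspNetCoreServer.Hosting NuGet package, a minimal API can be made Lambda-ready with one line:

```csharp
using Amazon.Lambda.AspNetCoreServer.Hosting;

var builder = WebApplication.CreateBuilder(args);

// When deployed to Lambda, this translates events from API Gateway's HTTP API
// into ASP.NET Core requests; when running locally on Kestrel, it is a no-op.
builder.Services.AddAWSLambdaHosting(LambdaEventSource.HttpApi);

var app = builder.Build();
app.MapGet("/", () => "Hello from .NET on AWS Lambda");
app.Run();
```

Because the call is a no-op outside Lambda, the same codebase runs unchanged during local development.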

Even after porting the codebase, rearchitecting an application can still take significant time and effort. For such scenarios, containerizing these applications and running them on services such as Amazon ECS or Amazon EKS is a viable approach. Containers are lightweight and generally start faster than virtual machines. If you already have Kubernetes skills within your organization, opting for Amazon EKS can give you a smoother operational transition.
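As a sketch of what containerizing looks like in practice, a typical multi-stage Dockerfile for an ASP.NET Core application might resemble the following (the project name MyWebApp and .NET 8.0 image tags are placeholders; adjust to your application):

```dockerfile
# Build stage: compile and publish with the full .NET SDK image
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyWebApp.csproj -c Release -o /app/publish

# Runtime stage: run on the smaller ASP.NET Core runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```

The multi-stage build keeps SDK tooling out of the final image, which shrinks the image and speeds up container start on Amazon ECS or Amazon EKS.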

Both container services also support the AWS Fargate launch type. (Note that at the time of this writing, Amazon EKS does not support Windows containers with AWS Fargate.) With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. It removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing.

Figure 3 depicts an ASP.NET web application deployed on AWS Fargate, load-balanced across two Availability Zones (AZs). A similar architecture can be used for web APIs, although web APIs typically place Amazon API Gateway in front of the Application Load Balancer.

Figure 3: Load-balanced workloads on AWS Fargate

Background services

Unlike web applications and web APIs, background services, such as worker services, typically don’t have a user interface. They are “headless” and are often triggered at regular time intervals or in response to system events rather than user events. We can further divide worker services into two subcategories:

  1. Batch jobs are run at set intervals and require steady compute resources. They can take hours to complete. Examples include month-end reconciliation jobs or overnight batch processing.
  2. Background processors need to be available to perform asynchronous processing on-demand. They tend to complete their work in a much shorter timeframe, usually in seconds or minutes. Examples include report generation and queue processing.
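In modern .NET, both subcategories are commonly built from the worker service template, whose core is a class derived from BackgroundService. A minimal sketch, assuming the Microsoft.Extensions.Hosting package (the class name and polling logic are placeholders):

```csharp
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class QueueProcessor : BackgroundService
{
    private readonly ILogger<QueueProcessor> _logger;

    public QueueProcessor(ILogger<QueueProcessor> logger) => _logger = logger;

    // Runs for the lifetime of the host, until shutdown is requested.
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Checking for work at {Time}", DateTimeOffset.Now);
            // Placeholder: poll a queue or run a scheduled unit of work here.
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}
```

The host registers the worker with `services.AddHostedService<QueueProcessor>()`; the same class can back either a scheduled batch job or an on-demand background processor.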

Options for long-running batch jobs include Amazon EC2, Amazon ECS, or Amazon EKS (including the AWS Fargate launch type). For all of these options, fault-tolerant jobs can take advantage of Amazon EC2 Spot Instances, which offer up to 90% lower cost compared with On-Demand pricing.

Migrating existing on-premises worker services and Windows services to the cloud provides an opportunity to modernize. For example, rather than implementing a Windows service, modern .NET allows worker service applications to run on Linux using the “systemd” service manager. For large-scale, container-based batch jobs, AWS Batch removes the need for you to configure and manage the compute infrastructure, while also simplifying coordination of jobs across multiple AZs within an AWS Region. However, not all workloads are suitable for AWS Batch. Visit AWS Batch dos and don’ts for more information about characteristics of suitable workloads.
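As a sketch of the systemd approach, assuming the Microsoft.Extensions.Hosting.Systemd NuGet package, a single call to `UseSystemd()` integrates a worker with the Linux service manager (the worker class referenced in the comment is a placeholder):

```csharp
using Microsoft.Extensions.Hosting;

// UseSystemd() makes the host report readiness and lifetime events to systemd,
// playing the role the Service Control Manager plays for a Windows service.
// The call is a no-op when the process is not running under systemd.
Host.CreateDefaultBuilder(args)
    .UseSystemd()
    .ConfigureServices(services =>
    {
        // Register your worker here, for example:
        // services.AddHostedService<MyWorker>();  // MyWorker is a placeholder
    })
    .Build()
    .Run();
```

A short systemd unit file pointing at the published binary is then all that is needed to run the worker as a managed Linux service.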

For background processors that are short-lived and need to run on-demand, serverless compute is a natural fit. Processes that complete in a short amount of time (for example, in seconds) can take advantage of AWS Lambda, which can be invoked on a schedule using Amazon EventBridge Scheduler or triggered by messages polled from an Amazon SQS queue (Figure 4).

Figure 4: Using Lambda for short-lived background processors
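As a minimal sketch of the queue-processing path, an SQS-triggered Lambda handler in C# might look like the following. It assumes the Amazon.Lambda.SQSEvents and Amazon.Lambda.Serialization.SystemTextJson packages; the per-message processing is a placeholder.

```csharp
using Amazon.Lambda.Core;
using Amazon.Lambda.SQSEvents;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

public class SqsProcessor
{
    // Lambda polls the queue on your behalf and invokes this handler
    // with a batch of messages (up to 10 per invocation by default).
    public void FunctionHandler(SQSEvent sqsEvent, ILambdaContext context)
    {
        foreach (var record in sqsEvent.Records)
        {
            context.Logger.LogLine($"Processing message {record.MessageId}: {record.Body}");
            // Placeholder: do the actual background work for each message here.
        }
    }
}
```

If the handler returns without throwing, Lambda deletes the batch from the queue; a failure makes the messages visible again for retry.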

For longer-running background processes, one option is to host them on Amazon EC2, Amazon ECS, or Amazon EKS. When using those services, it is important to consider uptime requirements and their cost implications. Amazon EC2 Spot Instances can help minimize costs, but be aware that Spot capacity can be reclaimed when demand is high, so Spot is best suited for fault-tolerant workloads. Another option is to break the background processor into multiple smaller steps, performed by multiple Lambda functions orchestrated with AWS Step Functions, as shown in Figure 5:

Figure 5: Using a Step Functions Standard Workflow to chain together multiple Lambda functions
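A chained workflow like the one in Figure 5 can be sketched in Amazon States Language. The state names and function ARNs below are placeholders for your own functions:

```json
{
  "StartAt": "ExtractData",
  "States": {
    "ExtractData": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ExtractData",
      "Next": "TransformData"
    },
    "TransformData": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TransformData",
      "Next": "LoadData"
    },
    "LoadData": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:LoadData",
      "End": true
    }
  }
}
```

Each Task state runs one Lambda function and passes its output as input to the next, so each function can stay well within Lambda's execution time limit even when the overall process is long-running.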


Conclusion

Running .NET workloads on Amazon EC2 Windows instances or in Windows containers on Amazon ECS or Amazon EKS is often seen as the default approach. However, it is important to consider application characteristics and other factors, such as operations, performance efficiency, and cost, to arrive at the right service for hosting your workloads.

Whether you are looking to build a new .NET application or migrate existing ones to the cloud, running .NET on Linux while taking advantage of serverless services, such as AWS Lambda, AWS Step Functions, and AWS Fargate, and integrating with other cloud-based services, such as Amazon EventBridge, Amazon S3, and Amazon SQS, tends to achieve the best results.

AWS has significantly more services, and more features within those services, than any other cloud provider, making it faster, easier, and more cost effective to move your existing applications to the cloud and build nearly anything you can imagine. Give your Microsoft applications the infrastructure they need to drive the business outcomes you want. Visit our .NET on AWS and AWS Database blogs for additional guidance and options for your Microsoft workloads. Contact us to start your migration and modernization journey today.

Adi Simon

Adi Simon is a Partner Solutions Architect at AWS specializing in migrating, running, and modernizing Microsoft workloads on AWS. He has extensive experience in software design and development, and has worked for a Tier-1 IT consulting firm and leading financial institutions.