Microsoft Workloads on AWS

Building Windows containers with AWS CodePipeline on AWS GovCloud (US)

Many AWS GovCloud (US) customers and their partners use AWS CodePipeline and AWS CodeBuild to build Continuous Integration/Continuous Deployment (CI/CD) pipelines on AWS. Building on AWS GovCloud (US), however, introduces a few restrictions, not present in other AWS Regions, when implementing pipelines for Windows container applications.

In this blog post, I will explain what these restrictions are and how to overcome them, and then walk you through a technical solution that you can further extend.

AWS CodeBuild Windows-specific restrictions

CodeBuild has supported Windows builds since 2018 through Windows Server container images; nonetheless, this support is limited to a specific number of Regions. Currently, the AWS Regions that support Windows Server Docker images are:

  • US East (N. Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • Europe (Ireland)

For Regions that do not support Windows Server container images, there is an alternative method for building Windows containers using Amazon Elastic Compute Cloud (Amazon EC2) Windows Server instances orchestrated by CodePipeline custom actions, AWS Lambda functions, and AWS Systems Manager. However, custom actions are not supported in AWS GovCloud (US) Regions.

I will walk you through a solution that builds Windows containers using Amazon EC2 Windows Server instances orchestrated by AWS Systems Manager Automation, both of which are supported in AWS GovCloud (US) Regions.

Solution overview

The following diagram shows the steps and actors involved during the execution of the solution:

Sequence diagram for the solution

Figure 1: Sequence diagram for the solution

Following is the description of the previous diagram:

  1. A new version of the source code is updated to Amazon Simple Storage Service (Amazon S3).
  2. CodePipeline reacts to this update and triggers a new pipeline execution.
  3. CodePipeline starts a new CodeBuild run.
  4. CodeBuild starts the execution of a Systems Manager Automation runbook orchestrating the build jobs and agents.
  5. Systems Manager selects a build agent from the existing ones, or creates one if none is available.
  6. Systems Manager runs the build commands on the selected build agent; this includes creating the container image and pushing it to an Amazon Elastic Container Registry (Amazon ECR) repository.
  7. Systems Manager stops the build agent if set to do so.
  8. CodePipeline continues with the following stages defined until completion.

The following diagram shows the components involved in the solution:

Architectural diagram for the solution

Figure 2: Architectural diagram for the solution

  1. CodePipeline pipeline used to orchestrate the whole CI/CD pipeline.
  2. Amazon S3 bucket where the source code will be stored and changes will be posted.
  3. CodeBuild project in charge of starting the execution of the Systems Manager Automation used to manage build agents and run build jobs.
  4. Systems Manager Automation runbook in charge of orchestrating the build agents and starting the build job.
  5. Systems Manager Command document in charge of defining the actual commands to run as part of the build job.
  6. Amazon Virtual Private Cloud (Amazon VPC) in which to create the Amazon EC2 instance used for the build process.
  7. Group of one or more Amazon EC2 instances used as build agents.
  8. Amazon CloudWatch log group where the logs from the CodeBuild run and the build agent will be collected.
  9. Amazon ECR repository to store the built Windows container image.

This blog post is accompanied by a GitHub repository where you will find an AWS CloudFormation template you can use to deploy the entire solution. I will be referencing that template throughout the rest of the post.

The building process

In this solution, there are three main components of the build process to focus on:

  1. Definition and customization of build agents.
  2. Orchestration of agent pools and build jobs.
  3. Definition and customization of build commands.

Build agents

The term “build agent” refers to Amazon EC2 instances that run build commands sent through Systems Manager. Using Amazon EC2 instances as build agents brings the following benefits to the whole CI/CD process:

  1. Ability to use specialized resources like graphics cards, NVMe storage, etc., which are only available on Amazon EC2 instances.
  2. Faster queue to execution times when posting a new build, thanks to an already-running Amazon EC2 instance.
  3. Lower build times for containerized applications by caching base images and layers from previous builds.
  4. Reuse of the same Amazon EC2 instances to run build jobs for multiple pipelines, even simultaneously.
  5. Customizability to install and configure dependencies and support software used by the build processes.

There are two main ways in which you can customize a build agent in this solution:

  1. Customizing an existing Amazon EC2 instance and adding it to the agent pool.
  2. Creating an Amazon Machine Image (AMI) and configuring the Systems Manager Automation runbook to use it when creating new build agents.

Customizing build agents through existing Amazon EC2 instances

Simply launch a Windows Server Amazon EC2 instance, configure it in a way that is appropriate for your environment, then add the tag aws-blog:codebuild:agent-pool with the value defined on the Build agent pool CloudFormation stack parameter.
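As a sketch, adding an existing instance to the pool from the AWS CLI could look like the following; the instance ID and pool name are placeholders you would replace with your own values:

```shell
# Placeholder values -- replace with your own instance ID and pool name.
INSTANCE_ID="i-0123456789abcdef0"
AGENT_POOL="windows-build-agents"

# The runbook discovers agents by this tag; the aws-blog prefix is configurable.
TAG_SPEC="Key=aws-blog:codebuild:agent-pool,Value=$AGENT_POOL"
echo "Tagging $INSTANCE_ID with $TAG_SPEC"

# Uncomment to apply the tag (requires AWS credentials):
# aws ec2 create-tags --resources "$INSTANCE_ID" --tags "$TAG_SPEC"
```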

Note: You can replace the aws-blog prefix of these tags with something more meaningful for you. To do this, simply replace that text in the accompanying CloudFormation template.

Customizing build agents through custom AMIs

You have some options on how to create an AMI. One option is to create it from an existing EC2 instance and another is to use EC2 Image Builder.

Using EC2 Image Builder provides greater flexibility, especially when it comes time to update dependencies or the base OS. For more information, read the blog post Speed up Windows container launch times with EC2 Image Builder and image cache strategy.

Once the new AMI is ready, replace the value in the Build agent AMI ID parameter of the CloudFormation stack with the AMI ID, and make sure to terminate all existing agents so that Systems Manager can create a new one.

Configuring network for build agents

Regardless of how the build agents were created, you must make sure there is connectivity between the agents and AWS Systems Manager, Amazon CloudWatch, Amazon S3, and Amazon ECR. You can use any of the following configurations to achieve this:

  1. Host the build agent in a public subnet with a Public IP assigned to it.
  2. Host the build agent in a private subnet with access to a NAT Gateway deployed in a public subnet.
  3. Host the build agent in a private subnet with access to a Site-to-Site VPN with outbound access to the internet.
  4. Host the build agent in a private subnet without outbound internet access but with VPC Endpoints configured for AWS Systems Manager, and all other necessary AWS services.
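For the fourth option, interface endpoint service names follow a Region-qualified pattern. The following sketch shows how they could be created with the AWS CLI; the VPC, subnet, and security group IDs are placeholders:

```shell
# Placeholder network identifiers -- replace with your own.
REGION="us-gov-west-1"
VPC_ID="vpc-0123456789abcdef0"
SUBNET_ID="subnet-0123456789abcdef0"
SG_ID="sg-0123456789abcdef0"

# Interface endpoints the build agent needs when it has no internet access.
for SVC in ssm ssmmessages ec2messages logs ecr.api ecr.dkr; do
  echo "Would create endpoint com.amazonaws.$REGION.$SVC"
  # Uncomment to create the endpoint (requires AWS credentials):
  # aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface \
  #   --vpc-id "$VPC_ID" --service-name "com.amazonaws.$REGION.$SVC" \
  #   --subnet-ids "$SUBNET_ID" --security-group-ids "$SG_ID"
done
# Amazon S3 is typically served by a Gateway endpoint instead.
```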

Agent pools and build jobs

An agent pool is nothing more than a set of Amazon EC2 instances from which one is randomly selected to run a build job. The main component in charge of orchestrating build jobs and agent pools is the Systems Manager Automation runbook named WindowsBuildAgentRunbook, which is defined in the accompanying CloudFormation template. Following is an excerpt of its definition:

    WindowsBuildAgentRunbook:
      Type: AWS::SSM::Document
      Properties:
        DocumentType: Automation
        Content:
          schemaVersion: "0.3"
          description: "Orchestrates Windows Server build agents and starts build jobs triggered by AWS CodeBuild."
          parameters:
            AmiId:
              type: String
              default: !Ref BuildAgentAmiId
              description: The Windows Server AMI to use for the new build agents.
            InstanceType:
              type: String
              description: (Optional) The EC2 instance type for new build agents.
              default: !Ref BuildAgentInstanceType
            IamInstanceProfileName:
              type: String
              description: (Required) The IAM instance profile to attach to new build agents.
              default: !Ref WindowsBuildAgentInstanceProfile
            VolumeSize:
              type: Integer
              description: (Required) Desired volume size (GiB) for new build agents.
              default: !Ref BuildAgentVolumeSize
            VolumeType:
              type: String
              description: (Required) Desired volume type for new build agents.
              default: !Ref BuildAgentVolumeType
            PipelineBucketName:
              type: String
              description: (Required) S3 Bucket to store build artifacts.
              default: !Ref ArtifactStoreBucket
            SourceArtifactS3Path:
              type: String
              description: (Required) Build artifact key.
            EcrRepoName:
              type: String
              description: (Required) ECR Repository name.
              default: !Ref EcrRepo
            ImageTag:
              type: String
              description: (Optional) Tag for the container image.
              default: latest
            CloudWatchLogGroupName:
              type: String
              description: (Optional) CloudWatch Log Group Name used to store build agent's logs.
              default: !Ref CloudWatchLogGroupName
            Region:
              type: String
              description: (Required) Region in which the AWS resources live.
              default: !Ref AWS::Region
            WorkingDirectory:
              type: String
              description: Working directory for the command.
              default: !Sub
                - 'C:\\jobs\${BuildAgentJobId}'
                - { BuildAgentJobId: !Select [2, !Split ['/', !Ref AWS::StackId]] }
            StopAfterBuild:
              type: String
              description: (Required) Defines if the agent is going to be stopped after the build process.
              default: !Ref StopAfterBuild
          mainSteps:
            - name: validateExistingInstance
            - name: chooseIfStartOrCreateInstance
            - name: createNewInstance
            - name: describeInstances
            - name: selectInstance
            - name: startInstance
            - name: waitForInstanceToBeReady
            - name: waitForSSMAgentOnline
            - name: downloadSource
            - name: runBuildCommand
            - name: chooseIfStopInstance
            - name: stopInstance

Most of the parameters received by the Automation runbook are set by the parameters of the CloudFormation stack. The following parameters are set at runtime when CodeBuild starts the automation execution:

  • ImageTag
  • PipelineBucketName
  • SourceArtifactS3Path
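To trigger the same run outside the pipeline (for example, when testing the runbook on its own), you could start it from the AWS CLI with those runtime parameters. In this sketch, the document name, bucket, and key are placeholders:

```shell
# Placeholder values matching the runtime parameters listed above.
DOCUMENT_NAME="WindowsBuildAgentRunbook-EXAMPLE"
IMAGE_TAG="v1.0.0"
BUCKET="my-pipeline-bucket"
KEY="src/source.zip"

# Build the parameter string in the shape the runbook expects.
PARAMS="ImageTag=$IMAGE_TAG,PipelineBucketName=$BUCKET,SourceArtifactS3Path=$KEY"
echo "$PARAMS"

# Uncomment to start the execution (requires AWS credentials):
# aws ssm start-automation-execution --document-name "$DOCUMENT_NAME" --parameters "$PARAMS"
```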

This Automation runbook follows the workflow below to obtain a valid build agent and run a build job:

Workflow diagram for build orchestration

Figure 3: Workflow diagram for build orchestration

Following is the description of the previous diagram:

  1. Systems Manager queries for Amazon EC2 instances whose aws-blog:codebuild:agent-pool tag contains the value defined in the Build agent pool parameter in the CloudFormation stack, and that are in a Pending, Running, or Stopped state.
  2. If the query does not return any results, then Systems Manager launches a new Amazon EC2 instance using the parameters defined in the New build agent configuration Parameter groups in the CloudFormation stack.
  3. Once launched, the new instance will have the following tags assigned to it:
    1. aws-blog:cloudformation:stack-id: ID of the CloudFormation stack.
    2. aws-blog:cloudformation:stack-name: Name of the CloudFormation stack.
    3. aws-blog:codebuild:agent-pool: Value of the Build agent pool name parameter.
  4. Systems Manager will query Amazon EC2 with the same criteria as step 1.
  5. Systems Manager will run a PowerShell script to randomly select a single Instance ID from the results of step 4.
  6. Before continuing to the next step, Systems Manager will wait until the following are true for the selected instance:
    1. Its state is Running.
    2. Its status checks are successful.
    3. Its Systems Manager agent is online.
  7. Systems Manager will run a PowerShell command on the selected instance to:
    1. Clear source and temporary files from previous jobs.
    2. Download the source zip file from the defined Amazon S3 bucket.
    3. Extract the downloaded zip file contents.
  8. Systems Manager will run the WindowsBuildDocument Command document on the selected instance to run the build job.
  9. Once the build job Command document has finished, Systems Manager will check if the Stop agent after build? parameter was set to true and will stop the build agent, otherwise it will finish the automation execution.

The previously mentioned step 4 enables support for multiple agents within a single agent pool. When Systems Manager finds more than one instance in the same agent pool, it picks one randomly so that the load is distributed more evenly across the pool. The agents in the pool could have been created manually, by this Systems Manager Automation runbook, or by another process.
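The random selection in step 5 can be sketched in plain shell; the instance IDs below are placeholders standing in for the results of the EC2 query:

```shell
# Placeholder instance IDs standing in for the agent-pool query results.
POOL_INSTANCES="i-0aaa1111 i-0bbb2222 i-0ccc3333"

# Pick one entry uniformly at random so load spreads across the pool.
SELECTED=$(echo "$POOL_INSTANCES" | tr ' ' '\n' | shuf -n 1)
echo "Selected build agent: $SELECTED"
```

Because every instance in the pool is interchangeable, a uniform random pick is enough to balance jobs without tracking per-agent state.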

Build commands

The build commands are separated into an independent Systems Manager Command document in order for them to be maintained independently from the orchestration logic. The Command document can be found in the accompanying CloudFormation template under the name WindowsBuildDocument, shown here:

    WindowsBuildDocument:
      Type: AWS::SSM::Document
      Properties:
        DocumentType: Command
        Content:
          schemaVersion: "2.2"
          description: "Runs commands to build and push Windows container application images"
          parameters:
            WorkingDirectory:
              description: Working directory for the command
              type: String
            EcrRepoName:
              description: ECR Repo Name
              type: String
            ImageTag:
              type: String
              description: (Optional) Tag for container image.
              default: latest
          mainSteps:
            - action: 'aws:runPowerShellScript'
              name: buildCommands
              inputs:
                runCommand:
                  - |
                    cd {{WorkingDirectory}}
                    $tmpFolder = $(Get-Location).Path
                    $srcFolder = "$tmpFolder\src"
                  - |
                    Write-Host "*****Building Docker image...*****"
                    $instanceInfo = (Invoke-RestMethod -Method Get -Uri 'http://169.254.169.254/latest/dynamic/instance-identity/document')
                    $repoUri = $instanceInfo.accountId + '.dkr.ecr.' + $instanceInfo.region + '.amazonaws.com/' + '{{EcrRepoName}}'
                    $latestUri = "$($repoUri):latest"
                    $hasCustomTag = $False
                    $buildArgs = @("-t", $latestUri)
                    $customTag = "{{ImageTag}}"

                    if ($customTag -and $($customTag -ne "latest")) {
                      $hasCustomTag = $True
                      $uriWithTag = "$($repoUri):$customTag"
                      $buildArgs += @("-t", $uriWithTag)
                    }

                    Write-Host "Docker Build Args are '$buildArgs'"

                    cd $srcFolder
                    docker build $buildArgs $srcFolder\

                    if ($LASTEXITCODE -ne 0) {
                      throw ("'docker build $buildArgs $srcFolder\' execution failed with exit code $LASTEXITCODE.")
                    }
                  - |
                    Write-Host "*****Pushing Docker image to ECR...*****"
                    Invoke-Expression -Command (Get-ECRLoginCommand -Region $instanceInfo.region).Command
                    docker push $repoUri --all-tags

                    if ($LASTEXITCODE -ne 0) {
                      throw "'docker push $repoUri --all-tags' execution failed with exit code $LASTEXITCODE."
                    }
All parameters in this document are set at runtime when the Automation runbook starts this command.

This Command document performs the following actions:

  1. Positions itself in the directory where the source code was extracted.
  2. Runs the docker build command for the Dockerfile defined in the source code.
  3. Tags the built Docker image according to the target Amazon ECR repository.
  4. Logs into the target Amazon ECR repository.
  5. Pushes the built Docker image to the target Amazon ECR repository.

Note: The commands contained in the Command document shown previously were adapted to work with the sample app accompanying this blog post.

To modify these steps to accommodate your own code, edit the CloudFormation template, or use the AWS Management Console to create a new version of the existing Command document and set it as the default version.

Note: If you manually edit the Systems Manager document and later update your CloudFormation template, you will lose the changes you made. I highly recommend making sure the content in your CloudFormation template matches what you modified on AWS before updating your stack.

CI/CD pipeline

Now it’s time to integrate all of the components and create a single unified CI/CD pipeline. You will use a combination of CodePipeline and CodeBuild to achieve this.

AWS CodePipeline is a continuous delivery service that you can use to model, visualize, and automate the steps required to release your software. In this solution, CodePipeline is used to:

  1. Automate the release process by defining repeatable actions like reacting after a change in the source code in S3, or starting the process of building the Windows container and deploying the container image into a service of your choosing.
  2. Enable real-time visualization of the status of the release process across the different services used.

AWS CodeBuild is a fully managed build service in the cloud that reduces the need to provision, manage, and scale your own build servers. In this solution, CodeBuild is used as a bridge to connect the build orchestration process defined in System Manager with CodePipeline.

A fully configured pipeline is already defined in the accompanying CloudFormation. In the following section, I am going to explore only the most important details of each stage.

Source stage of the pipeline

The Source Stage is configured with a single Amazon S3 action, configured as follows:

  1. Defines the Amazon S3 bucket where the source code object is located.
  2. Defines the key of the source code object to monitor.
  3. Defines the detection mechanism using AWS CodePipeline via periodic check for changes in the Amazon S3 object.
  4. Defines a variable namespace that further actions can use to access information about the triggering Amazon S3 object.
  5. Defines a name for the output artifact so that it can be used by further actions or stages.

CodePipeline Source stage configuration

Figure 4: CodePipeline Source stage configuration

Build stage of the pipeline

The Build Stage is configured with a single AWS CodeBuild action, configured as follows:

  1. Uses the output artifacts from the Source stage as input.
  2. Defines the CodeBuild project to run.
  3. Sets an Environment Variable called IMAGE_TAG with the Version ID from the Amazon S3 object that triggered the build. This variable is used by the CodeBuild project and is passed into System Manager to tag the resulting Windows Container image.
  4. Defines a name for the output artifact so that it can be used by further actions or stages.

CodePipeline Build stage configuration

Figure 5: CodePipeline Build stage configuration

CodeBuild Project

CodeBuild is used in this solution as a bridge to connect and control the execution of the Automation runbook used for orchestrating build jobs and build agents and the CI/CD pipeline. When a new build run is started, CodeBuild runs the following actions:

  1. Extracts the path to the source code Amazon S3 bucket and object.
  2. Starts a Systems Manager Automation runbook, sending the following parameters:
    1. Tag to use for the Windows container image.
    2. Amazon S3 bucket where the source code resides.
    3. Relative path to the source code Amazon S3 object.
  3. Enters a loop that polls until the automation execution is no longer in progress.

The CodeBuild project is defined inside the accompanying CloudFormation under the name CodeBuildProject. The following is an excerpt of its definition:

    CodeBuildProject:
      Type: AWS::CodeBuild::Project
      Properties:
        Artifacts:
          Type: CODEPIPELINE
        Source:
          Type: CODEPIPELINE
          InsecureSsl: false
          BuildSpec: !Sub |
            version: 0.2
            env:
              variables:
                DOCUMENT_NAME: ${WindowsBuildAgentRunbook}
                ECR_REPO_URI: ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${EcrRepo}
                LOG_GROUP_NAME: ${CloudWatchLogGroupName}
            phases:
              build:
                commands:
                  - BUILD_ID=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-8)
                  - SRC_ARTIFACT_S3_PATH=${!SRC_ARTIFACT_FULL_S3_PATH#*"/"}
                  - SRC_ARTIFACT_BUCKET_NAME=$(echo $SRC_ARTIFACT_FULL_S3_PATH | cut -c 1-$((${!#SRC_ARTIFACT_FULL_S3_PATH} - ${!#SRC_ARTIFACT_S3_PATH} - 1)))
                  - echo SRC_ARTIFACT_S3_PATH=$SRC_ARTIFACT_S3_PATH
                  - echo Starting automation execution using document $DOCUMENT_NAME...
                  - echo Logs are stored under CloudWatch Log Group $LOG_GROUP_NAME...
                  - EXECUTION_ID=$(aws ssm start-automation-execution --document-name $DOCUMENT_NAME --parameters ImageTag=$IMAGE_TAG,PipelineBucketName=$SRC_ARTIFACT_BUCKET_NAME,SourceArtifactS3Path=$SRC_ARTIFACT_S3_PATH --output text)
                  - echo Running execution $EXECUTION_ID...
                  - COMMAND="aws ssm describe-automation-executions --filters Key=ExecutionId,Values=$EXECUTION_ID"
                  - STATUS=$($COMMAND | jq -r ".AutomationExecutionMetadataList[0].AutomationExecutionStatus")
                  - |
                    while [ $STATUS = "InProgress" ];
                      do sleep 3;
                      STATUS=$($COMMAND | jq -r ".AutomationExecutionMetadataList[0].AutomationExecutionStatus");
                    done
                  - |
                    if [ $STATUS = "Success" ];
                    then echo Automation execution succeeded.;
                    else echo Automation execution failed. Please check CloudWatch log for details.;
                      ERROR_MSG=$($COMMAND | jq -r ".AutomationExecutionMetadataList[0].FailureMessage");
                      echo SSM Failure Message Follows.;
                      echo $ERROR_MSG;
                      exit 1;
                    fi
                  - echo Writing image definition file...
                  - printf '[{"imageUri":"%s"}]' $ECR_REPO_URI:$IMAGE_TAG > imagedefinitions.json
                  - cat imagedefinitions.json
            artifacts:
              files:
                - imagedefinitions.json
        ServiceRole: !GetAtt CodeBuildServiceRole.Arn
        TimeoutInMinutes: 60
        QueuedTimeoutInMinutes: 480
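The bucket/key split in the buildspec relies only on plain shell string handling, so you can verify the logic locally. The path below is a placeholder in the "bucket/key" form the pipeline passes in:

```shell
# Placeholder full S3 path in "bucket/key" form.
SRC_ARTIFACT_FULL_S3_PATH="my-pipeline-bucket/src/source.zip"

# Everything after the first "/" is the object key...
SRC_ARTIFACT_S3_PATH=${SRC_ARTIFACT_FULL_S3_PATH#*"/"}

# ...and the characters before that first "/" are the bucket name.
SRC_ARTIFACT_BUCKET_NAME=$(echo $SRC_ARTIFACT_FULL_S3_PATH | cut -c 1-$((${#SRC_ARTIFACT_FULL_S3_PATH} - ${#SRC_ARTIFACT_S3_PATH} - 1)))

echo "$SRC_ARTIFACT_BUCKET_NAME"   # my-pipeline-bucket
echo "$SRC_ARTIFACT_S3_PATH"       # src/source.zip
```

Note that in the buildspec these expansions are written as `${!VAR}` because `!` escapes the `${...}` syntax inside the CloudFormation `!Sub` function.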

Putting it all to the test

To test the entire solution, create a new CloudFormation stack using the accompanying CloudFormation template found on GitHub. That template defines five parameters to configure the CI/CD pipeline and eight parameters used to configure new agents created by the Automation Runbook. Following are the parameter definitions:

Parameter name | Definition
Source code S3 object key | Key of the Amazon S3 object containing the source code to build.
ECR repository name | Name of the target Amazon ECR repository.
Build agent pool name | Name of the agent pool to use.
Stop build agent after job? | Determines if build agents are stopped after a build job has finished.
CloudWatch log group name | Name of the Amazon CloudWatch log group where logs from CodeBuild and the build agents are collected.
Build agent AMI ID | AMI ID or SSM parameter expression used for new build agents.
Build agent EC2 instance type | Type of EC2 instance used for new build agents.
Build agent volume size | Size in GiB for the EBS volume attached to new build agents.
Build agent volume type | Type of EBS volume attached to new build agents.
Build agent name | Value for the 'Name' tag assigned to new build agents.
Build agent key pair name | (Optional) Name of the key pair used for new build agents.
Build agent Subnet ID | Subnet ID in which new build agents will run.
Build agent Security Group ID(s) | List of Security Group IDs to attach to new build agents.

Table 1: CloudFormation template parameters 

Note: The Build agent Subnet ID and Build agent Security Group ID(s) parameters need to be related to the same VPC; otherwise, there will be an error when trying to create a new build agent.
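If you prefer the CLI over the console, a deployment could look like the following sketch; the template file name and stack name are placeholders, and the actual parameter names are defined in the template on GitHub:

```shell
# Placeholder names -- adjust to your environment.
STACK_NAME="windows-container-pipeline"
TEMPLATE_FILE="template.yaml"
echo "Deploying $TEMPLATE_FILE as stack $STACK_NAME"

# Uncomment to deploy (requires AWS credentials; the template creates IAM roles):
# aws cloudformation deploy \
#   --template-file "$TEMPLATE_FILE" \
#   --stack-name "$STACK_NAME" \
#   --capabilities CAPABILITY_IAM
```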

Once the CloudFormation stack is deployed successfully, in order to start the build process, upload the source code you want to build as a zip file into the newly created Amazon S3 bucket.

I have put some sample code in the GitHub repository that you can use to test the solution. Download and zip the contents of the folder /src/sample-app/, name the zip file the same as the value of the Source code S3 object key parameter in the CloudFormation template, and upload it to the Amazon S3 bucket.
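Packaging and uploading the sample app could look like this sketch; the object key and bucket name are placeholders to replace with the values from your own stack:

```shell
# Placeholder values -- the key must match the Source code S3 object key parameter,
# and the bucket is the one created by the CloudFormation stack.
SOURCE_KEY="source.zip"
SOURCE_BUCKET="my-source-bucket"
echo "Would upload $SOURCE_KEY to s3://$SOURCE_BUCKET/"

# Uncomment to package and upload (requires zip and AWS credentials):
# cd src/sample-app && zip -r "../../$SOURCE_KEY" . && cd ../..
# aws s3 cp "$SOURCE_KEY" "s3://$SOURCE_BUCKET/$SOURCE_KEY"
```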

After you upload the file into your Amazon S3 bucket, you can navigate to the CodePipeline console to see your pipeline running. If it is not running, wait a few seconds, or release a new change manually by choosing the Release change button at the top right.

List of pipelines in CodePipeline console

Figure 6: List of pipelines in CodePipeline console

Selecting the name of the pipeline will display the progress for each of its stages. Once the Build stage is in progress, you can select Details to see the output of the CodeBuild project.

Status of CodePipeline execution

Figure 7: Status of CodePipeline execution

On the CodeBuild project run console, when you see on the Build logs the text Running command COMMAND="aws ssm describe-automation-executions --filters Key=ExecutionId,Values=$EXECUTION_ID" it means that the Systems Manager Automation runbook is already running.

CodeBuild log output segment

Figure 8: CodeBuild log output segment

To see the logs from System Manager, navigate to the AWS Systems Manager Automation Console. You should see an automation in progress. To see more details, select the Execution ID you want to explore.

List of Systems Manager Automation executions

Figure 9: List of Systems Manager Automation executions

In this new page, you can see which step the Automation runbook is currently on.

Status of step execution for a Systems Manager Automation

Figure 10: Status of step execution for a Systems Manager Automation

If you select one of the completed steps, you will be able to see the logs of the execution. If the step status is in progress, you will not be able to see the logs until it finishes.

Details of a step in Systems Manager Automation

Figure 11: Details of a step in Systems Manager Automation

To see the logs of a step that is in progress, navigate to the Amazon CloudWatch Log Groups Console, and select the log group created as part of the CloudFormation stack.

Details of a CloudWatch log group

Figure 12: Details of a CloudWatch log group

You can also explore the most recent Log stream to see the logs as they are being captured.

Log events of a CloudWatch log stream

Figure 13: Log events of a CloudWatch log stream

Where to go from here

The accompanying CloudFormation template provides a foundational solution for building Windows Containers on AWS GovCloud (US) using CodePipeline and CodeBuild. You can modify or build upon the solution to cover additional use cases, such as:

  • Amazon S3 was selected for this example for its versatility in supporting any type of code from any external version control system, but you can use AWS CodeCommit instead.
  • Currently, the source code downloaded into the build agent is deleted before a new build job starts, which avoids corrupting the build with out-of-date files. However, keeping files from previous jobs can speed up the building of large solutions; you can add that behavior if you consider it important.
  • This solution does not run unit testing or static code analysis on the code. You can easily include it as part of the build process by adding the necessary commands on the Systems Manager Command document used to run the build job.
  • If you choose not to stop the instances after each build, you can deploy a solution to automatically stop your build agents on a schedule, in order to save on costs.
  • The CodePipeline pipeline only has two stages. You can add additional stages for things like approvals or Continuous Deployment (CD) to services such as Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS).

Limits and costs

Deploying this solution into an AWS Account will incur the following passive and active costs:

Passive costs (charged regardless of use)

  • Amazon S3 storage.
  • Amazon Elastic Block Store (Amazon EBS) volumes.
  • Amazon EBS backups (if any).
  • Amazon EC2 instances in “running” state.
  • Amazon CloudWatch Logs (if exceeds the 5-GB limit for the free tier).
  • CodePipeline pipeline (if exceeds the 1-pipeline limit for the free tier).

Active costs (charged only when the solution is being used)

  • CodeBuild build minutes (if exceeds the 100-minute limit for the free tier).
  • Systems Manager Automation step count (if exceeds the 100,000-step limit for the free tier).
  • Systems Manager Automation step duration (if exceeds the 5,000-second limit for the free tier).


If you ran the previous steps as part of a workshop or testing, you may want to delete the resources to avoid incurring further charges. Most resources are deployed as part of the CloudFormation stack, so go to the CloudFormation console, select the specific stack, and choose Delete to remove the stack.

Before deleting the CloudFormation stack you must manually delete the following:

  1. Amazon S3 Source bucket files.
  2. Amazon S3 Artifacts bucket files.
  3. Amazon ECR repository images.
  4. Amazon EC2 instances used as build agents. (If you want to keep the instance alive, you must remove the IAM role it was assigned).
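As a sketch, the manual cleanup above could be scripted before deleting the stack; the bucket names, repository name, and stack name are placeholders:

```shell
# Placeholder resource names -- replace with the ones from your stack.
SOURCE_BUCKET="my-source-bucket"
ARTIFACT_BUCKET="my-artifact-bucket"
ECR_REPO="my-windows-app"
echo "Cleanup targets: s3://$SOURCE_BUCKET s3://$ARTIFACT_BUCKET ecr:$ECR_REPO"

# Uncomment to run the cleanup (requires AWS credentials):
# aws s3 rm "s3://$SOURCE_BUCKET" --recursive
# aws s3 rm "s3://$ARTIFACT_BUCKET" --recursive
# aws ecr batch-delete-image --repository-name "$ECR_REPO" \
#   --image-ids "$(aws ecr list-images --repository-name "$ECR_REPO" --query 'imageIds[*]' --output json)"
# aws cloudformation delete-stack --stack-name windows-container-pipeline
```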


In this blog post, I showcased how Systems Manager can be used to create and manage build jobs for Windows container applications, and how this helps you leverage CodeBuild in AWS GovCloud (US). I explored the different components comprising the solution and walked you through its configuration, execution, and monitoring. Finally, I laid out some ideas on what you can further improve and build on top of the solution.

If you have any other ideas on how to improve this solution and would like to share it with me and the audience, write them down in the comment section below.

AWS can help you assess how your company can get the most out of cloud. Join the millions of AWS customers that trust us to migrate and modernize their most important applications in the cloud. To learn more about modernizing Windows Server or SQL Server, visit Windows on AWS. Contact us to start your migration journey today.