Hosting Amazon Managed Workflows for Apache Airflow (MWAA) Local-runner on Amazon ECS Fargate for development and testing


Data scientists and engineers have made Apache Airflow a leading open-source tool for creating data pipelines due to its active open-source community, familiar Python-based development of Directed Acyclic Graph (DAG) workflows, and an extensive library of pre-built integrations. Amazon Managed Workflows for Apache Airflow (MWAA) is a managed service for Apache Airflow that makes it easy to run Airflow on AWS without the operational burden of managing the underlying infrastructure.

While business needs demand scalability, availability, and security, Airflow development often doesn’t require full production-ready infrastructure. Many DAGs are written locally, and when doing so, developers need to be assured that these workflows function correctly when they’re deployed to their production environment. To that end, the MWAA team created an open-source local-runner that uses many of the same library versions and runtimes as MWAA in a container that can run in a local Docker instance, along with utilities that can test and package Python requirements.

There are times when a full MWAA environment isn’t required, but a local Docker container doesn’t have access to the AWS resources needed to properly develop and test end-to-end workflows. In that case, the answer may be to run local-runner in a container on AWS; by running on the same configuration as MWAA, you can closely replicate your production MWAA environment in a lightweight development container. This post covers launching MWAA local-runner containers on Amazon Elastic Container Service (Amazon ECS) Fargate.


Prerequisites

  • This tutorial assumes you have an existing Amazon MWAA environment and wish to create a development container with a similar configuration. If you don’t already have an MWAA environment, you can follow the quick start documentation here to get started.
  • Docker installed on your local desktop.
  • The AWS Command Line Interface (AWS CLI) installed.
  • The Terraform CLI installed (only if using Terraform).
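Before starting, it can save time to confirm the required tools are on your PATH. A minimal sketch (the tool list here is an assumption based on the prerequisites above):

```shell
#!/bin/sh
# Check each prerequisite CLI; collect any that are missing.
missing=""
checked=0
for tool in docker aws terraform; do
  checked=$((checked + 1))
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
echo "Checked $checked tools; missing:${missing:- none}"
```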


  1. Clone the local-runner repository, set the environment variables, and build the image

We’ll start by pulling the latest Airflow version of the Amazon MWAA local-runner to our local machine.

Note: Replace <your_region> with your region and <airflow_version> with the version specified here.

git clone https://github.com/aws/aws-mwaa-local-runner.git
cd aws-mwaa-local-runner

export ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export REGION=<your_region>
export AIRFLOW_VERSION=<airflow_version>
./mwaa-local-env build-image

Note: We’re expressly using the latest version of the Amazon MWAA local-runner as it supports the functionality needed for this tutorial.

2. Push your local-runner image to Amazon ECR

aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com

aws ecr create-repository --repository-name mwaa-local-runner --region $REGION 

export AIRFLOW_IMAGE=$(docker image ls | grep amazon/mwaa-local | grep $AIRFLOW_VERSION | awk '{ print $3 }') 

docker tag $AIRFLOW_IMAGE $ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/mwaa-local-runner:latest

docker push $ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/mwaa-local-runner:latest
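The registry and image URI used in the tag and push commands follow the standard ECR naming scheme, `<account>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>`. A quick sketch assembling the URI from the variables set earlier (sample account ID and region shown; in the tutorial these come from `aws sts get-caller-identity` and your chosen region):

```shell
# Sample values; substitute your own account ID and region.
ACCOUNT_ID=123456789012
REGION=us-east-1

# Standard ECR image URI: <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
ECR_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/mwaa-local-runner:latest"
echo "$ECR_URI"
```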

Modify the MWAA execution role

For this example, we enable an existing MWAA role to work with Amazon ECS Fargate. As an alternative, you may also create a new task execution role.

  1. From the Amazon MWAA console, select the link of the environment whose role you wish to use for your Amazon ECS Fargate local-runner instance.
  2. Scroll down to Permissions and select the link to open the Execution role.
  3. Select the Trust relationships tab.
  4. Choose Edit trust policy.
  5. Under Statement -> Principal -> Service, add ecs-tasks.amazonaws.com, so that the trust policy resembles the following:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": [
                        "airflow.amazonaws.com",
                        "airflow-env.amazonaws.com",
                        "ecs-tasks.amazonaws.com"
                    ]
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
6. Select Update policy.

7. Choose the Permissions tab.

8. Select the link to the MWAA-Execution-Policy.

9. Choose Edit policy.

10. Choose the JSON tab.

11. In the Statement section describing logs permissions, under Resource, add arn:aws:logs:us-east-1:012345678910:log-group:/ecs/mwaa-local-runner-task-definition:*, where 012345678910 is replaced with your account number and us-east-1 is replaced with your region.

            "Effect": "Allow",
            "Action": [
            "Resource": [

12. We also want to add permissions that allow us to execute commands on the container and pull the image from Amazon ECR.

            "Effect": "Allow",
            "Action": [
            "Resource": "*"
            "Effect": "Allow", 
            "Action": [ 
             "Resource": "*" 

Note: Ensure that your private subnets have access to AWS Systems Manager (SSM) via an internet gateway or via AWS PrivateLink interface endpoints (for example, com.amazonaws.<region>.ssmmessages) in order to enable command execution.
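If you opt for PrivateLink, the interface endpoints follow the com.amazonaws.<region>.<service> naming scheme; ssmmessages is the one ECS Exec strictly requires, and ssm and ec2messages are commonly added alongside it. A sketch that builds the service names (the create-vpc-endpoint call shown in the comment is illustrative; substitute your own VPC and subnet IDs):

```shell
REGION=us-east-1   # sample region

# Build the PrivateLink service names for Systems Manager connectivity.
ENDPOINTS=""
for svc in ssm ssmmessages ec2messages; do
  ENDPOINTS="$ENDPOINTS com.amazonaws.${REGION}.${svc}"
done
ENDPOINTS="${ENDPOINTS# }"   # trim the leading space
echo "$ENDPOINTS"

# Each name would then be used in a call such as:
#   aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface \
#     --service-name com.amazonaws.$REGION.ssmmessages \
#     --vpc-id <your-vpc-id> --subnet-ids <your-private-subnet-ids>
```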

13. Choose Review policy.

14. Choose Save changes.

The creation of the Aurora PostgreSQL Serverless instance and Amazon ECS resources can be done using either AWS CloudFormation or Terraform, as described in the following sections. To create the required resources, clone the aws-samples/amazon-mwaa-examples repository.

Take note of the variables from the existing MWAA environment needed to create the Amazon ECS environment (i.e., security groups, subnet IDs, Virtual Private Cloud (VPC) ID, and execution role).

    $ export MWAAENV=test-MwaaEnvironment
    $ aws mwaa get-environment --name $MWAAENV --query 'Environment.NetworkConfiguration' --region $REGION
    {
        "SecurityGroupIds": [
            "sg-0123456789abcdef0"
        ],
        "SubnetIds": [
            "subnet-0123456789abcdef0",
            "subnet-0123456789abcdef1"
        ]
    }

    $ aws mwaa get-environment --name $MWAAENV --query 'Environment.ExecutionRoleArn' --region $REGION
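If you'd rather capture these values in shell variables for the steps that follow, the NetworkConfiguration response can be parsed with python3. The sketch below runs against a sample response (the IDs are placeholders), since the live call requires AWS credentials:

```shell
# Live call (requires credentials):
#   NETCONF=$(aws mwaa get-environment --name $MWAAENV \
#     --query 'Environment.NetworkConfiguration' --output json --region $REGION)
# Sample response used for illustration:
NETCONF='{"SecurityGroupIds":["sg-0abc1234"],"SubnetIds":["subnet-aaa1","subnet-bbb2"]}'

# Join the lists into comma-separated strings for later template parameters.
SECURITY_GROUPS=$(printf '%s' "$NETCONF" | python3 -c \
  'import json,sys; print(",".join(json.load(sys.stdin)["SecurityGroupIds"]))')
SUBNET_IDS=$(printf '%s' "$NETCONF" | python3 -c \
  'import json,sys; print(",".join(json.load(sys.stdin)["SubnetIds"]))')
echo "$SECURITY_GROUPS"
echo "$SUBNET_IDS"
```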

AWS CloudFormation

  1. Navigate to the ECS CloudFormation directory:
$ cd amazon-mwaa-examples/usecases/local-runner-on-ecs-fargate/cloudformation
  2. Update the AWS CloudFormation template input parameters file parameter-values.json in your favorite code editor (e.g., VS Code):
    "Parameters": {
        "ECSClusterName": "mwaa-local-runner-cluster",
        "VpcId": "your-mwaa-vpc-id",
        "ECRImageURI" : "",
        "SecurityGroups" : "sg-security-group-id",
        "PrivateSubnetIds" : "subnet-mwaapvtsubnetid1,subnet-mwaapvtsubnetid2",
        "PublicSubnetIds" : "subnet-mwaapublicsubnetid1,subnet-mwaapublicsubnetid2",
        "S3BucketURI" : "s3://your-mwaa-bucket-path",
        "ECSTaskExecutionRoleArn": "arn:aws:iam::123456789:role/service-role/mwaaExecutionRoleName",
        "AssignPublicIpToTask" : "yes"
  • [Optional] Additional AWS CloudFormation template input parameter values can be overridden either directly in the template (mwaa-on-ecs-fargate.yml) or supplied in the input parameter file from step 2.
  3. Deploy the AWS CloudFormation template:
$ aws cloudformation deploy \
--stack-name mwaa-ecs-sandbox \
--region $REGION \
--template-file mwaa-on-ecs-fargate.yml \
--parameter-overrides file://parameter-values.json \
--capabilities CAPABILITY_IAM

Where:

  • stack-name – the AWS CloudFormation stack name, e.g., mwaa-ecs-sandbox
  • region – the region where you want to install the stack; it can be sourced from an environment variable or replaced with a literal value, e.g., us-west-2
  • template-file – the template file name in the subfolder, mwaa-on-ecs-fargate.yml
  • parameter-overrides – the input parameter file updated with your environment values in step 2

It can take up to 40 minutes to create the required Amazon ECS and Amazon Relational Database Service (Amazon RDS) resources, after which a successful completion shows output such as:

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - mwaa-ecs-sandbox
  4. To validate the deployed environment, retrieve the output parameters that the AWS CloudFormation template generated, including the load balancer URL, with the describe-stacks command:
$ aws cloudformation describe-stacks --stack-name mwaa-ecs-sandbox --query 'Stacks[0].Outputs[*]'
[
    {
        "OutputKey": "LoadBalancerURL",
        "OutputValue": "",
        "Description": "Load Balancer URL"
    },
    {
        "OutputKey": "DBClusterEP",
        "OutputValue": "",
        "Description": "RDS Cluster end point"
    }
]
  5. To validate the local-runner on Amazon ECS Fargate, go to the Access the Airflow user interface section below, after the Terraform steps.
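To pull a single output value, such as the load balancer URL, straight into a variable, a JMESPath filter on describe-stacks works well. The sketch below shows the live call in a comment and demonstrates the equivalent filter locally against a sample outputs document (the hostname is a placeholder):

```shell
# Live call (requires the deployed stack):
#   ALB_URL=$(aws cloudformation describe-stacks --stack-name mwaa-ecs-sandbox \
#     --query "Stacks[0].Outputs[?OutputKey=='LoadBalancerURL'].OutputValue" \
#     --output text)
# Sample outputs document used for illustration:
OUTPUTS='[{"OutputKey":"LoadBalancerURL","OutputValue":"alb-123.us-east-1.elb.amazonaws.com"},
          {"OutputKey":"DBClusterEP","OutputValue":"db.cluster.example"}]'

# Equivalent filter applied locally with python3.
ALB_URL=$(printf '%s' "$OUTPUTS" | python3 -c \
  'import json,sys; o=json.load(sys.stdin); print(next(x["OutputValue"] for x in o if x["OutputKey"]=="LoadBalancerURL"))')
echo "$ALB_URL"
```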


Terraform

  1. Navigate to the ECS Terraform directory:
   $ cd amazon-mwaa-examples/usecases/local-runner-on-ecs-fargate/terraform/ecs
  2. Create the tfvars file that contains all the required parameters, replacing the values with those for your configuration:
$ cat <<EOT >> terraform.tfvars
assign_public_ip_to_task    = true
ecs_task_execution_role_arn = "arn:aws:iam::123456789:role/ecsTaskExecutionRole"
elb_subnets                 = ["subnet-b06911ed", "subnet-f3bf01dd"]
image_uri                   = ""
mwaa_subnet_ids             = ["subnet-b06911ed", "subnet-f3bf01dd"]
region                      = "us-east-1"
s3_dags_path                = "s3://airflow-mwaa-test/DAG/"
s3_plugins_path             = "s3://airflow-mwaa-test/"
s3_requirements_path        = "s3://airflow-mwaa-test/requirements.txt"
vpc_id                      = "vpc-e4678d9f"
vpc_security_group_ids      = ["sg-ad76c8e5"]
EOT
  3. Initialize the Terraform modules and plan the environment to create the RDS Aurora Serverless database. The subnet IDs and security group IDs of your environment can be retrieved from the previous step.


  • Make use of the existing MWAA environment’s subnets, VPC, and security groups.
  • The security group also needs to allow traffic to itself.
  • The security group needs to allow traffic from your local machine on port 80 to access the load balancer URL.
    $ terraform init

    $ terraform plan
Once the plan has succeeded, create the resources using the variables used in the previous step.

    $ terraform apply -auto-approve


   database_name = "AirflowMetadata"
   db_passsword = <sensitive>
   loadbalancer_url = ""
   rds_endpoint = ""

Note: You may face the error create: ExpiredToken: The security token included in the request is expired (status code: 403). If you do face this error, untaint the RDS resource and re-apply.
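The load balancer hostname from the Terraform outputs can be turned into the browsable address with a simple prefix. A sketch with a sample hostname (the live value would come from `terraform output`):

```shell
# Live: ALB_HOST=$(terraform output -raw loadbalancer_url)
ALB_HOST="mwaa-local-runner-alb-123456.us-east-1.elb.amazonaws.com"   # sample value

# The ALB listener is plain HTTP on port 80, so prefix with http://
AIRFLOW_URL="http://${ALB_HOST}"
echo "$AIRFLOW_URL"
```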

Access the Airflow user interface

  1. Direct your browser to the Application Load Balancer (ALB) URL from the AWS CloudFormation/Terraform output, being sure to preface it with http:// (the ALB listens on plain HTTP, port 80).

Note: If you chose an internal ALB, you’ll need to be on your VPC private subnet via VPN or similar.

  2. When presented with the Airflow user interface, log in with the username admin and the default password test1234.
  3. You are now in a standard Airflow deployment that closely resembles the configuration of MWAA using local-runner.

Updating the environment

  1. When you stop and restart the Amazon ECS Fargate task, the DAGs, plugins, and requirements will be re-initialized. This can be done through a forced update:
$ aws ecs update-service \
  --service mwaa-local-runner-service \
  --cluster mwaa-local-runner-cluster \
  --region $REGION \
  --force-new-deployment
  2. If you wish to do so without restarting the task, you may run the commands directly via execute-command:
    • If this is your first time running execute-command, then we need to update the service to allow this functionality:
$ aws ecs update-service \
  --service mwaa-local-runner-service \
  --cluster mwaa-local-runner-cluster \
  --region $REGION \
  --enable-execute-command \
  --force-new-deployment
    • When the AWS Fargate task resumes availability, we need to know the task ID:
$ aws ecs list-tasks \ 
  --cluster mwaa-local-runner-cluster \ 
  --region $REGION
    • This returns a JSON string containing an ARN whose final element is the unique task ID, in the format arn:aws:ecs:<region>:<account-id>:task/mwaa-local-runner-cluster/<task-id>.

In this case the task ID is 11aa22bb33cc44dd55ee66ff77889900, which we’ll use in the next command:

$ aws ecs execute-command \
  --region $REGION \
  --cluster mwaa-local-runner-cluster \
  --task 11aa22bb33cc44dd55ee66ff77889900 \
  --command "/bin/bash" \
  --interactive

Note: You may need to install the Session Manager plugin for the AWS CLI in order to execute commands.

    • At this point you can run any activities you wish, such as the s3 sync command to update your DAGs:
$ aws s3 sync --exact-timestamps --delete $S3_DAGS_PATH /usr/local/airflow/dags

Or view your scheduler logs:

$ cd /usr/local/airflow/logs/scheduler/latest; cat *
    • When complete, type exit to return to your terminal.


Cleaning up

  • If no longer needed, be sure to delete your AWS Fargate cluster, task definitions, ALB, Amazon ECR repository, Aurora RDS instance, and any other items you do not wish to retain.
    • With AWS CloudFormation, delete the stack:
$ aws cloudformation delete-stack --stack-name mwaa-ecs-sandbox --region $REGION
    • With Terraform, run:
$ terraform destroy

Important: Terminating resources that aren’t actively being used reduces costs and is a best practice. Not terminating your resources can result in additional charges.


In this post, we showed you how to run the Amazon MWAA open-source local-runner container image on Amazon ECS Fargate to provide a development and testing environment, using Amazon Aurora Serverless v2 as the database backend and execute-command on the AWS Fargate task to interact with the system.

To learn more about Amazon MWAA, visit the Amazon MWAA documentation. For more blog posts about Amazon MWAA, visit the Amazon MWAA resources page.

John Jackson


John has over 20 years of software experience as a developer, systems architect, and product manager in both startups and large corporations and is the AWS Principal Product Manager responsible for Amazon Managed Workflows for Apache Airflow (MWAA).

Anil Raut


Anil Raut is a Sr. Technical Account Manager at AWS, helping enterprise customer teams build highly available and resilient workloads with proactive measures and AWS Cloud best practices. Prior to AWS, he helped large customers modernize and optimize their hybrid container workloads (Red Hat OpenShift, VMware Tanzu, Microsoft .NET) and migrate to AWS container platforms, including Amazon ECS and Amazon EKS.

Nataizya Sikasote


Nataizya Sikasote is a Sr. Cloud Support Engineer, Containers, at AWS where he helps customers who are working with the AWS container services troubleshoot and achieve their desired outcomes. Nataizya is an accredited Subject Matter Expert in AWS ECS and aims to share his expertise in containers with as many customers as he can. Outside of containers, Nataizya enjoys scripting in Python and building solutions.