AWS Compute Blog

Introducing Blox from Amazon EC2 Container Service

by Chris Barclay | in Amazon ECS

Today we are announcing Blox, a new open source project from the Amazon ECS team that enables users to build custom schedulers and other tooling on top of ECS. Our goal with Blox is to provide tools that simplify the creation of custom schedulers, dashboards and other extensions, so that customers can meet the needs of their specific use cases.

ECS recently announced the availability of an event stream that delivers ECS container instance and task state changes to Amazon CloudWatch Events. Customers who build scheduling workflows often need to consume the events generated in the ECS cluster, persist this state locally, and operate on the local cluster state. Blox includes a cluster-state-service that provides this functionality and offers REST APIs on top of the local cluster state. Blox is targeted at developers who want to build custom schedulers or processes that need the current state of resources in the ECS cluster, and at developers who want to take action based on cluster events.
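
For example, once the cluster-state-service is running locally (it listens on port 3000 in the Docker Compose setup described below), you can query its REST API directly. The paths below are illustrative examples only; consult the Blox GitHub repository for the authoritative API reference.

# Illustrative queries against the local cluster-state-service (paths are examples)
$ curl http://localhost:3000/v1/instances   # container instances known to the service
$ curl http://localhost:3000/v1/tasks       # tasks and their last known state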

Blox also ships with a daemon-scheduler that supports launching exactly one copy of a task on each container instance in an ECS cluster. The scheduler monitors for new container instances joining the cluster and places the task on them. The Blox daemon-scheduler enables tasks like log agents and metric collection agents to run on ECS clusters.

We are excited to release Blox as open source software and plan to build an ecosystem of tools around ECS. If you are interested in using or contributing to Blox, come visit the Blox GitHub repository. We are tracking a number of feature proposals that we are evaluating for the roadmap. We invite you to come participate in our GitHub repository and help identify and prioritize improvements.

Deploying Blox

Blox Deployment on a Local Environment

Our recommended way for getting started with Blox is to deploy the framework on your local Docker installation. Blox offers a Docker Compose file that enables deployment in local environments. This allows you to get started with building custom schedulers using the cluster-state-service.

Here is the Blox architecture when run locally:

  • ECS pushes cluster state changes as CloudWatch events.
  • CloudWatch Events is configured to send these events to an SQS queue.
  • The Blox cluster-state-service consumes these events, recreates and stores the cluster state locally, and offers REST APIs on top of it.
  • The Blox daemon-scheduler uses the cluster-state-service APIs to track container instances in the ECS cluster and launch tasks on them.
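
For reference, a trimmed ECS task state change event, as delivered through CloudWatch Events, looks roughly like this (fields abbreviated for readability):

{
  "source": "aws.ecs",
  "detail-type": "ECS Task State Change",
  "region": "us-west-2",
  "detail": {
    "clusterArn": "arn:aws:ecs:us-west-2:<your-account-id>:cluster/default",
    "taskArn": "arn:aws:ecs:us-west-2:<your-account-id>:task/<task-id>",
    "lastStatus": "RUNNING",
    "desiredStatus": "RUNNING"
  }
}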

Step 1: Create SQS Queue and Configure CloudWatch Events to send ECS events to the SQS Queue

Blox depends on an ECS event stream that is delivered via CloudWatch Events. To use Blox, you need to create an SQS queue and configure CloudWatch Events to deliver the ECS events to this queue. Blox provides a pre-built AWS CloudFormation template that deploys and configures the required AWS components. Once you have pulled the CloudFormation template from the Blox repository, run the following command using the AWS CLI:

$ aws --region <region> cloudformation create-stack --stack-name BloxLocal --template-body file://cloudformation_template.json

In a few minutes, the CloudFormation template will finish setting up the CloudWatch Events rule and the SQS queue, and you will be ready to deploy Blox.
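
If you want to confirm that the stack is ready before moving on, one way is to poll its status with the AWS CLI:

# Wait for the stack to report CREATE_COMPLETE
$ aws --region <region> cloudformation describe-stacks --stack-name BloxLocal \
    --query 'Stacks[0].StackStatus'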

Step 2: Launch Blox

Next, download the Docker Compose file from the Blox repo. Before launching Blox, update docker-compose.yml with the following changes:

  • Update the AWS_REGION value with the region of your ECS and SQS resources.
  • Update the AWS_PROFILE value with your profile name in ~/.aws/credentials. You can skip this step if you are using the default profile.
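
As a rough sketch, the values you are editing sit in the environment section of the scheduler and cluster-state-service entries; the snippet below is illustrative and not the actual compose file from the Blox repo:

# Illustrative excerpt only; edit the corresponding keys in the real docker-compose.yml
scheduler:
  environment:
    AWS_REGION: us-west-2      # region of your ECS and SQS resources
    AWS_PROFILE: default       # profile name from ~/.aws/credentials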

After you have updated docker-compose.yml, you can use the following commands to launch the Blox containers on your local Docker environment.

# From the folder where you downloaded docker-compose.yml
$ docker-compose up -d
$ docker-compose ps

You will see output that shows the Blox cluster-state-service, daemon-scheduler, and etcd storage:

Name          Command                          State   Ports
-----------------------------------------------------------------------------
etcd_1        /usr/local/bin/etcd --data ...   Up      2379/tcp, 2380/tcp
scheduler_1   --bind 0.0.0.0:2000 --css- ...   Up      0.0.0.0:2000->2000/tcp
css_1         --bind 0.0.0.0:3000 --etcd ...   Up      3000/tcp

You have now completed the local installation of Blox. You can begin consuming the Scheduler API at http://localhost:2000/.
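
As a quick smoke test, you can send a request to the scheduler with curl. The /v1/environments path used here is an illustrative example; check the daemon-scheduler API documentation in the Blox repo for the exact routes:

# List environments known to the daemon-scheduler (path shown is illustrative)
$ curl http://localhost:2000/v1/environments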

Using the daemon-scheduler

The daemon-scheduler uses the following concepts:

  • An environment represents the configuration for the desired state of the tasks to be maintained. For the daemon-scheduler, the environment indicates the task definition to launch in a specific cluster.
  • A deployment is the operation that brings the environment into existence. A deployment indicates to the scheduler that the desired state described in the environment should be established in the cluster.

Step 1: Create an ECS cluster

If you don’t have an ECS cluster, follow our Create Cluster guide.

Step 2: Register Task Definition

To launch tasks in an ECS cluster, you need to register a task definition with ECS. Here is a task definition you can use if you don’t have one already.

$ cat > /tmp/nginx.json << EOF
{
    "family": "nginx",
    "containerDefinitions": [{
        "name": "nginx",
        "image": "nginx",
        "cpu": 1024,
        "memory": 128
    }]
}
EOF

$ aws ecs register-task-definition --cli-input-json file:///tmp/nginx.json

Query the ARN for the nginx task definition. You need this for the next step.

$ aws ecs list-task-definitions
{
   "taskDefinitionArns": [
        "arn:aws:ecs:us-west-2:<your-account-id>:task-definition/nginx:1"
    ]
}
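
If you have registered many task definitions, you can narrow the output to the nginx family and print just the latest revision's ARN using the CLI's built-in filtering:

# List only the nginx family and print the most recent revision's ARN
$ aws ecs list-task-definitions --family-prefix nginx \
    --query 'taskDefinitionArns[-1]' --output text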

Launch Daemon workloads using the daemon-scheduler

For this exercise, we will be using the demo-cli that Blox provides to interact with the scheduler. Please consult the Blox GitHub repository regarding the APIs that the daemon-scheduler exposes.

Step 3: Create an environment

Create an environment by replacing the cluster name and task definition ARN in the following command:

./blox-create-environment.py --environment TestEnvironment --cluster <MyClusterName> --task-definition <task-def-arn>

Sample output:

{
  "items": [
    {
      "deploymentToken": "17fb6b8b-abf3-4e7b-b9f4-fdb431d53887",
      "health": "healthy",
      "name": "releaseenvironment",
      "instanceGroup": {
        "cluster": "arn:aws:ecs:us-west-2:12345678:cluster/BloxTestCluster-1123-2"
      }
    }
  ]
}

Upon successful creation of the environment, the daemon-scheduler response contains a deploymentToken, which you will use in the next step.
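
If you have jq installed, one convenient pattern is to capture the token when you create the environment in the previous step, instead of copying it by hand. This assumes the demo CLI prints the JSON response shown above to stdout:

# Assumes the demo CLI prints the JSON response to stdout and jq is installed
$ DEPLOYMENT_TOKEN=$(./blox-create-environment.py --environment TestEnvironment \
    --cluster <MyClusterName> --task-definition <task-def-arn> \
    | jq -r '.items[0].deploymentToken')
$ echo $DEPLOYMENT_TOKEN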

Step 4: Create a Deployment

To bring this environment into existence in your ECS cluster, you need to perform a deployment operation:

./blox-create-deployment.py --environment TestEnvironment --deployment-token <deploymentToken>

Creating a deployment will result in the scheduler launching the task definition attached to the environment across all the container instances in your cluster. You can now go to the ECS console and check out the tasks running on your container instances. You have now successfully used the daemon-scheduler to launch daemon workloads in your ECS cluster.
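
You can also verify from the command line that tasks were launched; the number of running tasks should match the number of container instances in the cluster:

# Count the running tasks placed by the scheduler
$ aws ecs list-tasks --cluster <MyClusterName> --desired-status RUNNING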

Blox Deployment on AWS

Blox can also be deployed on AWS. Use the Blox CloudFormation template to create:

  • A new ECS cluster with one container instance, on which the Blox components are set up as an ECS service with cluster-state-service, daemon-scheduler, and etcd containers making up a single task.
  • An Application Load Balancer in front of the daemon-scheduler endpoint.
  • An API Gateway endpoint that serves as the public-facing frontend to Blox and provides the authentication mechanism. This API Gateway can be used to reach the scheduler and manage tasks on the ECS cluster.
  • A Lambda function that acts as a simple proxy, enabling the public-facing API Gateway endpoint to forward requests to the ALB listener in the VPC.

This Blox deployment can then be used to manage ECS clusters associated with the account.
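
When deployed this way, requests go through the API Gateway invoke URL from the stack outputs rather than localhost. The URL shape and path below are illustrative only, and depending on how authentication is configured you may also need to sign the request or supply an API key:

# Illustrative only: substitute the invoke URL from the CloudFormation stack outputs
$ curl https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/v1/environments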

See the instructions in our GitHub repo for the steps to configure this option.

Available now

Blox is available now. To learn more, see the Blox documentation in our GitHub repo.