AWS Field Notes

Optimize your Java application for Amazon ECS with Quarkus

By Sascha Möllering, Manager, Solutions Architecture

In this blog post, I show you an interesting approach: implementing a Java-based application and compiling it to a native image using Quarkus. This native image is the main application, which is containerized and runs in an Amazon Elastic Container Service (Amazon ECS) cluster with the AWS Fargate launch type.

Quarkus is a Supersonic Subatomic Java framework that uses OpenJDK HotSpot as well as GraalVM and over fifty different libraries like RESTEasy, Vert.x, Hibernate, and Netty. In a previous blog post, I demonstrated how GraalVM can be used to optimize the size of Docker images. GraalVM is an open source, high-performance polyglot virtual machine from Oracle. I use it to compile native images ahead of time to improve startup performance and reduce the memory consumption and file size of Java Virtual Machine (JVM)-based applications. The framework that allows ahead-of-time (AOT) compilation is called SubstrateVM.

Application Architecture

The GitHub repository containing the demo application can be found here.

Our application is a simple REST-based Create Read Update Delete (CRUD) service that implements basic user management functionalities. All data is persisted in an Amazon DynamoDB table. Quarkus offers an extension for Amazon DynamoDB that is based on AWS SDK for Java V2. This Quarkus extension supports two different programming models: blocking access and asynchronous programming. For local development, DynamoDB Local is also supported. DynamoDB Local is the downloadable version of DynamoDB that lets you write and test applications without accessing the DynamoDB service. Instead, the database is self-contained on your computer. When you are ready to deploy your application in production, you can make a few minor changes to the code so that it uses the DynamoDB service.

The REST-functionality is located in the class UserResource which uses the JAX-RS implementation RESTEasy. This class invokes the UserService that implements the functionalities to access a DynamoDB table with the AWS SDK for Java. All user-related information is stored in a Plain Old Java Object (POJO) called User.
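To illustrate the shape of these three layers without reproducing the repository's code, here is a minimal in-memory sketch in plain Java. The class and field names (InMemoryUserService, userName, firstName, lastName) are assumptions for illustration, not copied from the demo repository, and the map stands in for the DynamoDB table:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical POJO mirroring the User object described above;
// the actual field names in the repository may differ.
class User {
    final String userName;
    final String firstName;
    final String lastName;

    User(String userName, String firstName, String lastName) {
        this.userName = userName;
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

// In-memory stand-in for the DynamoDB-backed UserService: the same
// CRUD surface, but backed by a map instead of a DynamoDB table.
class InMemoryUserService {
    private final Map<String, User> table = new ConcurrentHashMap<>();

    User get(String userName)    { return table.get(userName); }
    void add(User user)          { table.put(user.userName, user); }
    User delete(String userName) { return table.remove(userName); }
}
```

In the real application, UserResource maps the HTTP verbs onto calls like these via RESTEasy, and the service issues the corresponding DynamoDB requests through the AWS SDK for Java instead of touching a map.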

Building the Application

To create a Docker container image that can be used in the task definition of my ECS cluster, only three simple steps are necessary: building the application, creating the Docker container image, and pushing the created image to my Docker image registry.

To build the application, I used Maven with different profiles. The first profile (the default profile) creates an uber-JAR – a self-contained application with all dependencies. This is very useful if you want to run local tests with your application, because the build time is much shorter compared to the native-image build. When you run the package command, it also executes all tests, which means you need DynamoDB Local running on your workstation.

$ docker run -p 8000:8000 amazon/dynamodb-local -jar DynamoDBLocal.jar -inMemory -sharedDb

$ mvn package
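To make the local test run hit DynamoDB Local instead of the DynamoDB service, the Quarkus DynamoDB extension can be pointed at the local endpoint through configuration. A minimal sketch of src/main/resources/application.properties for such a setup follows; the concrete values (port, region, dummy credentials) are assumptions for a local environment, not taken from the repository:

```properties
# Point the Quarkus DynamoDB extension at DynamoDB Local
quarkus.dynamodb.endpoint-override=http://localhost:8000
quarkus.dynamodb.aws.region=us-east-1
# DynamoDB Local accepts any static dummy credentials
quarkus.dynamodb.aws.credentials.type=static
quarkus.dynamodb.aws.credentials.static-provider.access-key-id=test-key
quarkus.dynamodb.aws.credentials.static-provider.secret-access-key=test-secret
```

When deploying to production, removing the endpoint override makes the extension resolve the real DynamoDB service endpoint, which is the "few minor changes" mentioned above.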

The second profile uses GraalVM to compile the application into a native image. In this case, you use the native image as the base for a Docker container. The Dockerfile can be found under src/main/docker/Dockerfile.native and uses a build pattern called multi-stage build.

$ mvn package -Pnative -Dquarkus.native.container-build=true

An interesting aspect of multi-stage builds is that you can use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base image, and begins a new stage of the build. You can pick the necessary files and copy them from one stage to another, thereby limiting the number of files you have to copy. Use this feature to build your application in one stage and copy your compiled artifact and additional files to your target image. In this case, you use ubi-quarkus-native-image:19.2.1 as the base image and copy the necessary TLS files (the SunEC library and the certificates), then point your application to those files with JVM properties.

FROM quay.io/quarkus/ubi-quarkus-native-image:19.2.1 as nativebuilder
RUN mkdir -p /tmp/ssl-libs/lib \
  && cp /opt/graalvm/jre/lib/security/cacerts /tmp/ssl-libs \
  && cp /opt/graalvm/jre/lib/amd64/libsunec.so /tmp/ssl-libs/lib/

FROM registry.access.redhat.com/ubi8/ubi-minimal
WORKDIR /work/
COPY target/*-runner /work/application
COPY --from=nativebuilder /tmp/ssl-libs/ /work/
RUN chmod 775 /work
EXPOSE 8080
CMD ["./application", "-Dquarkus.http.host=0.0.0.0", "-Djava.library.path=/work/lib", "-Djavax.net.ssl.trustStore=/work/cacerts"]

In the second and third steps, I build the Docker image and push it to a Docker registry of my choice, which is straightforward:

$ docker build -f src/main/docker/Dockerfile.native -t <repo/image:tag> .

$ docker push <repo/image:tag>

Setting up the Infrastructure

You’ve compiled the application to a native image and built a Docker image. Now, you set up the basic infrastructure consisting of an Amazon VPC, an Amazon ECS cluster with AWS Fargate launch type, an Amazon DynamoDB table, and an Application Load Balancer.

Figure 1: Architecture of the infrastructure

Codifying your infrastructure allows you to treat your infrastructure as code. In this case, you use the AWS Cloud Development Kit (AWS CDK), an open source software development framework, to model and provision your cloud application resources using familiar programming languages. The code for the CDK application can be found under ecs_cdk/lib/ecs_cdk-stack.ts. Set up the infrastructure in the AWS Region us-east-1:

$ npm install -g aws-cdk // Install the CDK

$ cd ecs_cdk

$ npm install // retrieves dependencies for the CDK stack

$ npm run build // compiles the TypeScript files to JavaScript

$ cdk deploy  // Deploys the CloudFormation stack

The output of the AWS CloudFormation stack is the load balancer’s DNS record. The heart of our infrastructure is an Amazon ECS cluster with AWS Fargate launch type, which is set up as follows:

const cluster = new ecs.Cluster(this, "quarkus-demo-cluster", {
  vpc: vpc
});

const logging = new ecs.AwsLogDriver({
  streamPrefix: "quarkus-demo"
});

const taskRole = new iam.Role(this, 'quarkus-demo-taskRole', {
  roleName: 'quarkus-demo-taskRole',
  assumedBy: new iam.ServicePrincipal('ecs-tasks.amazonaws.com')
});

const taskDef = new ecs.FargateTaskDefinition(this, "quarkus-demo-taskdef", {
  taskRole: taskRole
});

const container = taskDef.addContainer('quarkus-demo-web', {
  image: ecs.ContainerImage.fromRegistry("<repo/image:tag>"),
  memoryLimitMiB: 256,
  cpu: 256,
  logging
});

container.addPortMappings({
  containerPort: 8080,
  hostPort: 8080,
  protocol: ecs.Protocol.TCP
});

const fargateService = new ecs_patterns.ApplicationLoadBalancedFargateService(this, "quarkus-demo-service", {
  cluster: cluster,
  taskDefinition: taskDef,
  publicLoadBalancer: true,
  desiredCount: 3,
  listenerPort: 8080
});
This part of the CDK application creates an Amazon ECS cluster and adds a Fargate service based on our application. To have better visibility, the application creates a dedicated log group with the prefix quarkus-demo to send log streams to. Since you need access to a DynamoDB table, the application sets up an AWS Identity and Access Management (IAM) task role for our ECS tasks with the service principal ecs-tasks.amazonaws.com. The next step is to create a Fargate task definition that uses the role you previously created. In the task definition, you use the Docker image you created and define the CPU, memory, and port mapping configuration, because you have to expose port 8080. The service definition creates a public Application Load Balancer and three task instances.

Cleaning up

After you are finished, you can easily destroy all of these resources with a single command to save costs.

$ cdk destroy


In this post, I described how Java applications can be implemented using Quarkus, compiled to a native image, and run on Amazon ECS with AWS Fargate. I also showed how the AWS CDK can be used to set up the basic infrastructure. I hope I’ve given you some ideas on how you can optimize your existing Java application to reduce startup time and memory consumption. Feel free to submit enhancements to the sample template in the source repository.

About the Author

Sascha Möllering has been working for more than four years as a Solutions Architect and Solutions Architect Manager at Amazon Web Services EMEA in the German branch. He shares his expertise with a focus on Automation, Infrastructure as Code, Distributed Computing, Containers, and JVM in regular contributions to various IT magazines and blogs.
