AWS Developer Tools Blog

Using Amazon Corretto (OpenJDK) for lean, fast, and efficient AWS Lambda Applications

By Guest Blogger Adam Bien

In this post, I will discuss how you can launch large, monolithic applications on top of AWS Lambda, and I’ll show that they perform well and are cost effective. You’ll learn that the same application you develop for Lambda can be run locally or deployed across your favorite AWS container service, without modifications. Furthermore, I’ll introduce you to running Amazon Corretto on ARM64 using Lambda’s Graviton2-based offering. Corretto, Amazon’s version of OpenJDK, is my go-to option, especially for ARM64, because it lets you run a Java Lambda on Graviton processors efficiently and cost-effectively. I’ll be using it to demonstrate how you can use it with Quarkus to build microservices quickly and at low cost.

Built-in functionality over external dependencies

Java’s dynamic dependency injection mechanism was introduced in 2009 via “JSR 330: Dependency Injection for Java” and is often used with MicroProfile or Jakarta EE.

For those unfamiliar with dependency injection, think of it as having your classes’ instance variables populated for you by a service. This is accomplished through annotations that you place in your code, e.g., @Inject: rather than instantiating an instance variable yourself, you ask the runtime to do it for you. The injected instance is loosely coupled with your client class, which makes your code easier to maintain. You can also develop test cases more easily, because you can inject variables into your classes based on configuration files without recompiling, and loosely coupled classes are simpler to test in isolation. A MicroProfile runtime combined with Java’s built-in class libraries lets you make external dependencies optional and increases productivity with fast iterations and short deployment times. Using dependency injection also lets you focus on developing your app, without having to repeatedly implement boilerplate code that adds no differentiated value.
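To make the idea concrete, here is a minimal, hand-wired sketch of the loose coupling that @Inject provides. The names (GreetingStore, GreetingService) are hypothetical, and a real CDI container would perform the wiring that main() does manually here:

```java
// Hypothetical names, illustrating the idea behind @Inject without a CDI container.
interface GreetingStore {
    String message();
}

class ProductionStore implements GreetingStore {
    public String message() { return "hello from production"; }
}

class GreetingService {
    // In CDI this field would carry @Inject; here the "container"
    // is simulated by constructor wiring.
    final GreetingStore store;

    GreetingService(GreetingStore store) { this.store = store; }

    String greet() { return this.store.message().toUpperCase(); }
}

public class InjectionDemo {
    public static void main(String[] args) {
        // Production wiring:
        System.out.println(new GreetingService(new ProductionStore()).greet());
        // Test wiring: a fake store is injected without recompiling GreetingService.
        GreetingService testable = new GreetingService(() -> "hi, test");
        System.out.println(testable.greet());
    }
}
```

Because GreetingService only depends on the interface, a test can swap in a one-line fake, which is exactly the testability benefit described above.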

However, even a small Java microservice can become a large Lambda. The following code illustrates a typical Create, Read, Update, Delete (CRUD) resource in Java, composed of a few injected classes:

import javax.inject.Inject;
import javax.json.*;
import javax.ws.rs.*;
import javax.ws.rs.core.Response;

public class CRUDResource {

  @Inject
  CRUDStore store;

  @GET
  @Path("{id}")
  public JsonObject fetch(@PathParam("id") String id) {
    var result = this.store.find(id);
    return result.orElseThrow(NotFoundException::new);
  }

  @GET
  public JsonArray findAll() {}

  @DELETE
  @Path("{id}")
  public void delete(@PathParam("id") String id) {}

  @DELETE
  public void deleteAll() {}

  @PUT
  @Path("{id}")
  public Response upsert(@PathParam("id") String id, JsonObject input) {}

  @PATCH
  @Path("{id}")
  public Response patch(@PathParam("id") String id, JsonObject input) {}

  @POST
  public Response insert(JsonObject input) {}
}

A Java microservice is usually composed of multiple REST endpoints. On the one hand, an application containing multiple HTTP endpoints in a single Lambda would be considered quite large by Lambda standards. On the other hand, a cohesive, monolithic Java microservice reduces development and deployment complexity. In my experience, monolithic Java microservices have been more productive and easier to maintain. The challenges of distributed computing don’t apply to a monolith, because all of the invocations within the monolith are local. The question then becomes: is it viable to take a Java microservice and deploy it on top of AWS Lambda?

Quarkus – A Next Generation Runtime

Quarkus is a framework that simultaneously supports MicroProfile APIs and the Amazon API Gateway.

This means that Quarkus can interact with the API Gateway REST API, WebSocket API, or HTTP API. For example, an API Gateway/Elastic Load Balancer converts HTTP requests to HTTP events and forwards them to the Lambda runtime. Quarkus consumes the HTTP request with a generic Lambda implementation. The ability to map HTTP requests for large numbers of endpoints – with each handling various HTTP methods – to the reactive event model within Quarkus is a productive way to build microservice-like Lambda applications. Deploying this on Lambda means that you can have a monolithic web application, capable of supporting many different kinds of HTTP requests, that is spun up to handle traffic on demand.

Let’s discuss performance. The duration of both cold and warm starts must be minimized to meet cost and customer experience expectations. Quarkus’ build-time optimizations reduce startup time and decrease the need for reflection during dependency injection. Execution times improve because Quarkus examines the metadata at build time and replaces expensive reflection with straightforward, generated bytecode, instead of relying on the JVM to run sophisticated class-loading mechanisms and dynamic invocations at runtime.
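As a toy illustration of the difference (not Quarkus’ actual mechanism), the following contrasts a reflective method lookup-and-invoke, as a classic runtime DI container might perform on every request, with the direct call that build-time-generated bytecode effectively amounts to:

```java
import java.lang.reflect.Method;

public class ReflectionCostDemo {

    public String greet() { return "hello"; }

    public static void main(String[] args) throws Exception {
        var target = new ReflectionCostDemo();
        int sink = 0; // accumulate results so the JIT cannot drop the calls

        // Dynamic path: resolve the method by name at runtime, then invoke reflectively.
        Method m = ReflectionCostDemo.class.getMethod("greet");
        long t0 = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) sink += ((String) m.invoke(target)).length();
        long reflective = System.nanoTime() - t0;

        // Direct path: the plain call that generated bytecode boils down to.
        t0 = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) sink += target.greet().length();
        long direct = System.nanoTime() - t0;

        System.out.printf("reflective: %d ns, direct: %d ns (sink=%d)%n",
                reflective, direct, sink);
    }
}
```

Exact timings vary by JVM and warm-up, but the direct path avoids the per-call argument boxing and access checks of Method.invoke, which is the class of overhead Quarkus moves to build time.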

Quarkus offers productivity gains through its use of the Jakarta Contexts and Dependency Injection (CDI), Jakarta JSON Binding (JSON-B), Jakarta JSON Processing (JSON-P), JAX-RS, Jakarta Bean Validation, and MicroProfile Config APIs. These combine to make developing HTTP services more convenient.

JAX-RS maps HTTP requests to Java methods, JSON-B binds JSON to classes, JSON-P parses JSON into a Map-like structure, and CDI lets you make your code leaner and more testable. Moreover, Bean Validation lets you verify the correctness of your input parameters, and MicroProfile Config makes Lambda environment entries directly injectable into Java fields.

Quarkus integrates these APIs as extensions and performs the optimizations mentioned above. You can use them without significant performance impacts.

From a JAR to a Lambda function

Let’s deploy a JAX-RS endpoint:

@Path("hello")
public class GreetingResource {

  @Inject
  Greeter greeter;

  @GET
  public String hello() {
    return this.greeter.greetings();
  }

  @POST
  public void hello(String message) {
    this.greeter.greetings(message);
  }
}

with an injected Java class containing a MicroProfile Config property:

import static java.lang.System.Logger.Level.INFO;

public class Greeter {

  static System.Logger LOG = System.getLogger(Greeter.class.getName());

  @ConfigProperty(defaultValue = "hello, quarkus on AWS", name = "message")
  String message;

  public String greetings() {
    return this.message;
  }

  public void greetings(String message) {
    LOG.log(INFO, "received: " + message);
  }
}
Note: The code is available on GitHub.

In the above code, you’re requesting that MicroProfile Config inject the String message field’s value from a well-defined sequence of sources: META-INF/microprofile-config.properties, environment entries, and system properties. Lambda’s configuration, specified via the CDK or AWS Management Console, overrides the default values specified in the configuration file.
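The lookup order can be sketched with a hand-rolled resolver. This is an illustration of the precedence only, not the MicroProfile Config API itself (the real mechanism is a chain of prioritized ConfigSources):

```java
import java.util.Map;
import java.util.Optional;

public class ConfigLookupDemo {

    // Stands in for META-INF/microprofile-config.properties.
    static final Map<String, String> FILE_DEFAULTS =
            Map.of("message", "hello, quarkus on AWS");

    // Default MicroProfile Config precedence: system properties win over
    // environment variables, which win over the packaged properties file.
    static Optional<String> lookup(String key) {
        return Optional.ofNullable(System.getProperty(key))
                .or(() -> Optional.ofNullable(System.getenv(key)))
                .or(() -> Optional.ofNullable(FILE_DEFAULTS.get(key)));
    }

    public static void main(String[] args) {
        // With no override present, the packaged default wins.
        System.out.println(lookup("message").orElseThrow());
        // A Lambda environment entry named "message" (e.g. set via the CDK's
        // environment(configuration)) would take precedence over the file default.
    }
}
```

This is why the values set in the LambdaStack configuration Map override the defaultValue in the Greeter class without any code change.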

Infrastructure as Code (IaC) with Java

Java is not only a great language with which to build Lambda functions, but it is also well suited for IaC automation with AWS Cloud Development Kit (CDK) v2. The following LambdaStack class, comprising the Lambda and HTTP API Gateway constructs and the corresponding integration, is packaged and deployed as an AWS CloudFormation stack.

public class LambdaStack extends Stack {

  static Map<String, String> configuration = Map.of("message", "hello, quarkus / large AWS Lambda");
  static String functionName = "aws_Large";
  // Quarkus' generic Lambda entry point
  static String lambdaHandler = "io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler::handleRequest";
  static int memory = 1024; //~0.5 vCPU
  static int timeout = 10;

  public LambdaStack(Construct scope, String id) {
    super(scope, id);
    var function = createFunction(functionName, lambdaHandler, configuration, memory, timeout);
    integrateWithHTTPApiGateway(function);
  }

  void integrateWithHTTPApiGateway(Function function) {
    var lambdaIntegration = HttpLambdaIntegration.Builder.create("HttpApiGatewayIntegration", function)
        .build();
    var httpApiGateway = HttpApi.Builder.create(this, "HttpApiGateway")
        .defaultIntegration(lambdaIntegration)
        .build();
    CfnOutput.Builder.create(this, "HttpApiGatewayUrlOutput").value(httpApiGateway.getUrl()).build();
  }

  Function createFunction(String functionName, String functionHandler, Map<String, String> configuration, int memory, int timeout) {
    return Function.Builder.create(this, functionName)
        .functionName(functionName)
        .runtime(Runtime.JAVA_11)
        .architecture(Architecture.ARM_64)
        .handler(functionHandler)
        .code(Code.fromAsset("../lambda/target/function.zip")) // the Quarkus build output
        .memorySize(memory)
        .timeout(Duration.seconds(timeout))
        .environment(configuration)
        .build();
  }
}

The CDK example code is available on GitHub.

Under Lambda, the number of vCPU resources provided is tied to the amount of memory that you select. Choosing a configuration that uses 1 GB of RAM will provide us with approximately half of a vCPU, which is sufficient for our needs. We configure Corretto 11 with the method runtime(Runtime.JAVA_11) and the target CPU architecture with architecture(Architecture.ARM_64), which will use AWS Graviton2 processors.

The Map passed as configuration to the Builder in the method environment(configuration) is injectable via MicroProfile Config.

The “executable” main method in the CDKApp class instantiates the LambdaStack:


public class CDKApp {

  public static void main(final String[] args) {
    var app = new App();
    var appName = "oversized";
    Tags.of(app).add("project", "MicroProfile with Quarkus on AWS Lambda");
    Tags.of(app).add("application", appName);

    new LambdaStack(app, appName);
    app.synth();
  }
}

Our example is structured as a “self-provisioned service”, and it ships with two directories: lambda and cdk.

The Maven command mvn clean package, executed from the lambda directory, starts the Quarkus build and creates function.zip. After function.zip is available, you execute mvn clean package && cdk deploy from the cdk directory to provision the AWS resources and deploy the Lambda to AWS. The overall Lambda package size is 13.7 MB.

large.HttpApiGatewayUrlOutput = https://{GENERATED_ID}

The performance and costs of large Lambdas on Corretto (OpenJDK 11)

Our Lambda is accessible via HTTP: https://{GENERATED_ID}. The very first request is a cold start. The first invocation, curl https://{GENERATED_ID}, takes three seconds for our MicroProfile application. Then internal OpenJDK optimizations kick in and further optimize the runtime performance.

Even though our Lambda is a full-stack MicroProfile Java application with dependency injection, JSON-B serialization, MicroProfile Config, and running on a 0.5 vCPU, a “warm” request takes only 5-6 ms.

[CloudWatch report: a billed duration of 536 ms for the first invocation and between 4.17 ms and 7.12 ms for the next six. Each invocation had the same 1024 MB maximum memory configured and used 157 MB.]

One reason that you might look into using a full-stack framework such as Quarkus on top of Lambda is the potential cost savings. For example, after entering our Lambda settings: Region: EU (Frankfurt), Architecture: Arm, Number of requests: five per second, average duration: 6 ms, amount of memory allocated: 1024 MB, and amount of ephemeral storage allocated: 512 MB (minimum/unchanged) into the AWS pricing calculator, the resulting monthly Lambda costs, Without Free Tier, are: $3.68.

[AWS Pricing Calculator: 5 requests per second with a 6 ms request duration, 1024 MB of RAM, and 512 MB of ephemeral storage costs $21.80 per month.]

More realistically, a Lambda will use external AWS Services, e.g., Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), Amazon Kinesis, or Amazon MSK. If you were to assume the calls would be made synchronously and increase the average Lambda instance duration from 6 ms to ~100 ms, then the resulting monthly costs would be approximately $20.15. Feel free to do your own calculations and try it for yourself.
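A back-of-the-envelope version of this calculation can be sketched as follows. The per-GB-second and per-request prices below are assumptions based on published Arm pricing for EU (Frankfurt) and should be checked against the AWS Pricing Calculator; the results land close to the figures above, with the remaining gap explained by billing granularity and rounding:

```java
public class LambdaCostEstimate {

    // Assumed EU (Frankfurt) Arm prices -- verify against current AWS pricing.
    static final double PRICE_PER_GB_SECOND = 0.0000133334;
    static final double PRICE_PER_MILLION_REQUESTS = 0.20;

    static double monthlyCost(double requestsPerSecond, double durationSeconds, double memoryGb) {
        double requestsPerMonth = requestsPerSecond * 60 * 60 * 24 * 30; // ~12.96M at 5 req/s
        double gbSeconds = requestsPerMonth * durationSeconds * memoryGb;
        return gbSeconds * PRICE_PER_GB_SECOND
                + requestsPerMonth / 1_000_000 * PRICE_PER_MILLION_REQUESTS;
    }

    public static void main(String[] args) {
        System.out.printf("6 ms avg:   $%.2f%n", monthlyCost(5, 0.006, 1.0));
        System.out.printf("100 ms avg: $%.2f%n", monthlyCost(5, 0.1, 1.0));
    }
}
```

Note how the request charge dominates at 6 ms, while the compute charge dominates at ~100 ms; this is why keeping warm invocations in the single-digit-millisecond range pays off.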

Hybrid deployments and local testing

Although our simplistic application is packaged as a Lambda (function.zip), it’s still a regular Quarkus/MicroProfile application, and it can be started locally with mvn compile quarkus:dev. After less than a second (0.861 s), or after a local “cold start,” the application becomes available under http://localhost:8080/hello.

Quarkus also supports container (Docker) builds out of the box. The same Quarkus/MicroProfile application can be deployed as a Lambda function and to all AWS services supporting containers, such as Amazon Elastic Container Service (Amazon ECS)/Amazon Elastic Kubernetes Service (Amazon EKS) on Fargate, AWS Elastic Beanstalk, AWS App Runner, or Amazon Lightsail. In addition to container images, Quarkus builds also produce executable JARs which are runnable on bare Amazon Elastic Compute Cloud (Amazon EC2) instances.

Lambda Functions are not application servers

Although we can run the same Java bytecode without modification on multiple AWS Cloud services, Lambda behaves differently from an application running on an application server in a container.

Quarkus in a container runs continuously, and multiple threads serve the requests. A Lambda is single-threaded. Several independent Lambda instances and JVMs respond to parallel requests.

Containers are restarted less frequently, and a Lambda is potentially re-initialized at every request.

MicroProfile and CDI support instance lifecycles/contexts such as RequestScoped, SessionScoped, or ApplicationScoped. In the Lambda case, only the ApplicationScoped context is applicable.
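The reason only ApplicationScoped survives can be simulated in plain Java: state created during static initialization (the cold start) is reused by every warm invocation of the same execution environment, while request-level state lives only for a single handler call. The names here are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class WarmStateDemo {

    // "ApplicationScoped": created once per execution environment (cold start)
    // and reused by every warm invocation of that environment.
    static final AtomicInteger coldStarts = new AtomicInteger();
    static final String expensiveResource = initResource();

    static String initResource() {
        coldStarts.incrementAndGet(); // counts how often initialization runs
        return "connection-pool-or-client";
    }

    // One call per request; Lambda sends each environment a single
    // request at a time, so there is no request-level concurrency here.
    static String handleRequest(String payload) {
        String requestScoped = "state for " + payload; // dies with this invocation
        return expensiveResource + " handled " + requestScoped;
    }

    public static void main(String[] args) {
        handleRequest("req-1"); // cold start already happened, in static init
        handleRequest("req-2"); // warm: expensiveResource is reused
        System.out.println("cold starts: " + coldStarts.get()); // prints 1
    }
}
```

Expensive objects (HTTP clients, SDK clients) therefore belong in application scope or static fields, while anything per-request must be recreated on every invocation.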

Given the stateless and short-lived nature of Lambda, some MicroProfile APIs (e.g., startup hooks, MicroProfile metrics, timers, schedules, JDBC connection pools) must be extracted from Java code to AWS services (e.g. Amazon EventBridge, Amazon CloudWatch metrics, or Amazon RDS proxy).


Together with Corretto’s JDK, Lambda is an excellent MicroProfile deployment platform. MicroProfile and Lambda let you focus on business value rather than the underlying infrastructure. MicroProfile provides implementation-agnostic APIs, Quarkus implements MicroProfile APIs which improve startup and runtime behavior, and Corretto continuously optimizes the performance of “warm” functions. Furthermore, Corretto is optimized to run on ARM64 architectures and Graviton2 processors more cost-efficiently. To see me present the ideas from this article in a recorded video format, see “Microprofile on Quarkus” and “MicroProfile on Quarkus as AWS Lambda deployed with AWS CDK“.


Adam Bien

Adam Bien is a software architect and developer (with usually 20/80 distribution) in Java (SE / EE / Jakarta EE / MicroProfile) and Web (ES 6+, Web Components, Web Standards “no frameworks”) projects. Often he’s starting as an architect and after a few days finds himself developing PoCs, performing code reviews, or helping the teams developing critical parts of the system.
In recent years, he has helped many clients migrate Java EE / Jakarta EE / MicroProfile applications to serverless architectures on AWS. Such projects often started as code and architecture reviews and ended with a pragmatic cloud migration. He speaks regularly at conferences, but doesn’t consider himself a professional speaker, nor a writer. He just really enjoys writing code and killing the bloat. …and Java is perfect for that.