AWS Open Source Blog

Getting started with Spring Boot on AWS: Part 2

This is a guest post from Björn Wilmsmann, Philip Riecks, and Tom Hombergs, authors of the upcoming book Stratospheric: From Zero to Production with Spring Boot and AWS.

In part 1 of this two-part Spring Boot tutorial, we provided a brief introduction to Spring Cloud for AWS and covered how to display the contents of an S3 bucket with Thymeleaf in our demo application. In part 2, we’ll show how to subscribe to an SQS queue and externalize the configuration of our application using the Parameter Store of AWS Systems Manager.

Feature 2: Subscribing to an SQS queue

To consume SQS messages, our application has to poll the SQS service regularly to check whether messages are available. We don’t want to write this polling logic ourselves, as it would involve threading and failure handling.

As Spring Boot developers, we are used to annotating methods for consuming messages or events, for example like this: @EventListener(ApplicationReadyEvent.class).

Spring Cloud AWS brings the polling logic for SQS and SNS. By annotating a method with @SqsListener, we can subscribe to a queue. Because all SQS message bodies are strings, we also get support for deserializing and converting the messages to Java objects.

Our S3 bucket is configured to send an event to our SQS queue whenever someone uploads a new file to our bucket. This way, we can develop a synchronization mechanism and update our simple file viewer in the browser using WebSockets.
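Setting up this S3-to-SQS notification is an infrastructure concern and not part of the application code shown here. Just to sketch the idea, it could be done with the AWS SDK for Java roughly like this (the bucket name, queue ARN, and configuration name are hypothetical, and the queue’s access policy must additionally allow S3 to send messages to it):

AmazonS3 amazonS3 = AmazonS3ClientBuilder.defaultClient();

// notify our (hypothetical) queue about every newly created object in the bucket
QueueConfiguration queueConfiguration = new QueueConfiguration(
        "arn:aws:sqs:eu-central-1:123456789012:demo-queue",
        EnumSet.of(S3Event.ObjectCreated));

amazonS3.setBucketNotificationConfiguration("demo-bucket",
        new BucketNotificationConfiguration().addConfiguration("file-upload-event", queueConfiguration));

With the notification in place, the listener looks like this: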

@Component
public class QueueListener {

   private static final Logger LOGGER = LoggerFactory.getLogger(QueueListener.class);

   public QueueListener() {
   }

   @SqsListener(value = "${custom.sqs-queue-name}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
   public void onS3UploadEvent(S3EventNotification event) {
       LOGGER.info("Incoming S3EventNoticiation: " + event.toJson());
      
       // e.g. now use WebSockets to update the view of all active users
   }
}
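The WebSocket part itself is not covered in this post. As a rough sketch, assuming spring-boot-starter-websocket with STOMP messaging is configured, we could inject Spring’s SimpMessagingTemplate into the listener and push an update to all subscribed browsers (the destination name is made up):

   @SqsListener(value = "${custom.sqs-queue-name}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
   public void onS3UploadEvent(S3EventNotification event) {
       // simpMessagingTemplate is an injected org.springframework.messaging.simp.SimpMessagingTemplate;
       // every client subscribed to the hypothetical destination /topic/file-uploads receives the event
       this.simpMessagingTemplate.convertAndSend("/topic/file-uploads", event.toJson());
   }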

Looking at the listener again: the SQS message payload is deserialized and passed to our method. We can also define the deletion policy and could manually acknowledge or reject messages. For this example, we use ON_SUCCESS, which acknowledges (deletes) the message if no exception is thrown.
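If we need more control, Spring Cloud AWS also lets the listener method receive an Acknowledgment parameter. With the deletion policy set to NEVER, a sketch could look like this (the actual processing is omitted):

   @SqsListener(value = "${custom.sqs-queue-name}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
   public void onS3UploadEvent(S3EventNotification event, Acknowledgment acknowledgment) {
       // ... process the event ...

       // only this call deletes the message from the queue; without it, the message
       // becomes visible again after the visibility timeout and is redelivered
       acknowledgment.acknowledge();
   }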

The AWS SDK for Java also comes with classes for AWS events. This allows us to have type-safe access to the event using the S3EventNotification class.

Because we are consuming an AWS event message, and because these messages do not contain a MIME type header, we have to slightly adjust our QueueMessageHandlerFactory and instruct Jackson (used to deserialize the JSON messages) to convert messages without a strict content type match:

@Bean
public QueueMessageHandlerFactory queueMessageHandlerFactory() {
   QueueMessageHandlerFactory factory = new QueueMessageHandlerFactory();
   MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
   messageConverter.setStrictContentTypeMatch(false);
   factory.setArgumentResolvers(Collections.singletonList(new PayloadMethodArgumentResolver(messageConverter)));
   return factory;
}

If you have used Spring Boot in the past, you might be familiar with the various *Template classes (for example, JdbcTemplate or JmsTemplate) that provide a general abstraction over a certain technology, following the Template method pattern.

To programmatically send and receive both SQS and SNS messages, two templates are available. All we have to do is create them:

@Configuration
public class MessagingConfig {

   @Bean
   public QueueMessagingTemplate queueMessagingTemplate(AmazonSQSAsync amazonSQSAsync) {
       return new QueueMessagingTemplate(amazonSQSAsync);
   }

   @Bean
   public NotificationMessagingTemplate notificationMessagingTemplate(AmazonSNS amazonSNS) {
       return new NotificationMessagingTemplate(amazonSNS);
   }

   // QueueMessageHandlerFactory Bean
}

Next, we can inject both beans into our queue listener and process the incoming message further, for example by sending a message to another queue or publishing to a topic to inform multiple subscribers:

@Component
public class QueueListener {

   private static final Logger LOGGER = LoggerFactory.getLogger(QueueListener.class);

   private final QueueMessagingTemplate queueMessagingTemplate;
   private final NotificationMessagingTemplate notificationMessagingTemplate;

   public QueueListener(QueueMessagingTemplate queueMessagingTemplate,
                        NotificationMessagingTemplate notificationMessagingTemplate) {
       this.queueMessagingTemplate = queueMessagingTemplate;
       this.notificationMessagingTemplate = notificationMessagingTemplate;
   }

   @SqsListener(value = "${custom.sqs-queue-name}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
   public void onS3UploadEvent(S3EventNotification event) {
       LOGGER.info("Incoming S3EventNoticiation: " + event.toJson());

       String bucket = event.getRecords().get(0).getS3().getBucket().getName();
       String key = event.getRecords().get(0).getS3().getObject().getKey();

       Message<String> payload = MessageBuilder
               .withPayload("New upload happened: " + bucket + "/" + key)
               .build();

       this.queueMessagingTemplate.convertAndSend("queueNameToNotify", payload);
       this.notificationMessagingTemplate.convertAndSend("topicNameToNotify", payload);
   }
}

Feature 3: Externalizing the application configuration

We don’t want to store sensitive values such as the database password or API credentials in our application.yml (the central configuration file of a Spring Boot application). Additionally, we usually deploy our application with different profiles (for example, production or development); therefore, we need to change the configuration based on the active profile.

Here, the AWS Systems Manager Parameter Store can help us with storing both plain strings and secure strings containing sensitive information. Once we start our application, it will reach out to the Parameter Store and fetch its configuration based on the profile that is active. Let’s see how we can achieve this.

To start, we need a name for our application; let’s call it demo-application. This is important, as we have to follow a convention for naming the configuration parameters to make this work without further tweaks (convention over configuration in action).

A configuration parameter must be named after this pattern:

/config/<name_of_the_app>_<profile>/<config_value>

Valid configuration keys are the following:

  • /config/demo-application_production/spring.datasource.password
  • /config/demo-application/spring.datasource.password

    (if we don’t activate any profile and use Spring’s default, we can omit the profile indicator)

  • /config/demo-application_development/custom.clients.weather-api.secret-key

For this demo, let’s extract the name of the S3 bucket and the SQS queue name, as these change depending on the application’s environment.
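The two parameters could, for example, be created with the AWS CLI. The values and the bucket name property are made up; custom.sqs-queue-name is the property our listener already resolves:

aws ssm put-parameter --name "/config/demo-application_production/custom.sqs-queue-name" --value "prod-upload-queue" --type String
aws ssm put-parameter --name "/config/demo-application_production/custom.bucket-name" --value "prod-upload-bucket" --type String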

Technically, the following will happen now: Whenever we request a configuration value inside our application, Spring uses a set of PropertySourceLocators to “search” for the definition of a config value. By default, Spring uses a set of different locators to retrieve the value (for example, environment variables, the Servlet context, JVM arguments, and so on).

Using Spring Cloud AWS and Spring Boot’s auto-configuration, we now plug an AwsParamStorePropertySourceLocator into this lookup mechanism. Our application will then also take the AWS Parameter Store into consideration when resolving configuration values.

If the default naming scheme for the configuration parameters does not fit your use case, you can define your own.
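In Spring Cloud AWS 2.x this is typically done via the aws.paramstore.* properties, which have to live in the bootstrap configuration (bootstrap.yml) because the Parameter Store is queried during the bootstrap phase. A sketch with made-up values:

aws:
  paramstore:
    enabled: true
    prefix: /configuration
    name: demo-application
    profile-separator: "-"

With these settings, the application would look up keys such as /configuration/demo-application-production/custom.sqs-queue-name instead of the defaults.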

Packaging the application

We can now build our application with Gradle using ./gradlew assemble. As a result, we get a fat JAR that we can run with a single command:

java -jar build/libs/getting-started-with-spring-boot-on-aws-final.jar 

To create the Docker image, we can use Amazon Corretto as the base image. What’s left is to copy our JAR file into the container and specify the ENTRYPOINT:

FROM amazoncorretto:11-alpine
COPY build/libs/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
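Building and running the image locally could then look like this (the image name is our choice, port 8080 assumes Spring Boot’s default, and the container additionally needs AWS credentials and a region, here passed as environment variables):

docker build -t demo-application .
docker run -p 8080:8080 \
  -e AWS_REGION=eu-central-1 \
  -e AWS_ACCESS_KEY_ID=... \
  -e AWS_SECRET_ACCESS_KEY=... \
  demo-application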

Once you have a containerized version of your application, you can push the Docker image to Amazon Elastic Container Registry (Amazon ECR) and deploy it with Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS).
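For reference, pushing the image to Amazon ECR could look roughly like this with a recent AWS CLI (the account ID, region, and repository name are placeholders):

aws ecr create-repository --repository-name demo-application
aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com
docker tag demo-application:latest 123456789012.dkr.ecr.eu-central-1.amazonaws.com/demo-application:latest
docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/demo-application:latest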

You can find the source code with detailed instructions on how to build and run this application on GitHub.

Conclusion

Spring Cloud AWS makes AWS a first-class citizen cloud provider for Spring Boot applications. We showed how easy it is to integrate core AWS services like SQS, S3, or the Parameter Store. There are way more features of Spring Cloud AWS to explore: Amazon Relational Database Service (Amazon RDS) support (e.g., read/write replicas), Amazon Simple Email Service (Amazon SES) integration, Amazon Simple Notification Service (Amazon SNS), and CloudFormation support.

Make sure to follow the latest development of the library on GitHub. Maciej Walkowiak and Eddú Meléndez Gonzales are doing a great job, and the roadmap for version 3.0 makes us look forward to what’s next.

In our upcoming book Stratospheric: From Zero to Production with Spring Boot and AWS, we will develop a more advanced application and guide you through all required steps to get your Spring Boot application running on AWS. Not only will you learn how to develop a state-of-the-art Spring Boot application that utilizes multiple AWS services, but you will also learn how to get your application production-ready. Join our mailing list for more information about this book.

Feature image via Pixabay.

Björn Wilmsmann

Björn Wilmsmann is an independent IT consultant who helps companies transform their business into a digital business. A longtime software entrepreneur, he’s interested in web apps and SaaS products. He designs and develops business solutions and enterprise applications for his clients. Apart from helping companies in matters of software quality and improving the availability of and access to information through APIs, Björn provides hands-on training in technologies such as Angular and Spring Boot.

Philip Riecks

Under the slogan Testing Java Applications Made Simple, Philip provides recipes and tips & tricks to accelerate your testing success on both his blog and on YouTube. He is an independent IT consultant from Berlin and is working with Java, Kotlin, Spring Boot, and AWS on a daily basis.

Tom Hombergs

Tom is a senior software engineer at Atlassian in Sydney, working with AWS and Spring Boot at scale. He is running the successful software development blog reflectoring.io, regularly writing about Java, Spring, and AWS with the goal of explaining not only the “how” but the “why” of things. Tom is the author of Get Your Hands Dirty on Clean Architecture, which explores how to implement a hexagonal architecture with Spring Boot.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

Ricardo Sueiras

Cloud Evangelist at AWS. Enjoy most things where technology, innovation and culture collide into sometimes brilliant outcomes. Passionate about diversity and education and helping to inspire the next generation of builders and inventors with Open Source.