General

Q: What is a modern application?

A modern application is the combination of modern technologies, architectures, software delivery practices, and operational processes that lead teams to deliver value more quickly, frequently, consistently, and safely. These applications typically take advantage of loosely coupled, distributed technologies and focus on event-driven, serverless components that allow teams to offload undifferentiated heavy lifting in order to spend more time on delivering value for their customers. A modern application also takes advantage of operational and security tooling to increase the reliability and consistency of deployments, while making it safe to deploy many times a day. Automating infrastructure, security, and deployments allows the teams that own modern applications to move more quickly than they could with manual processes or heavier operational management.

Q: How are companies building modern applications?

A modern application is typically built by shrinking the scope of the application to increase agility and simplify operational and risk concerns. Developers of these applications focus on selecting the right tool for the job to ensure the selected architecture appropriately matches the purpose of the application. For example, a relational database suits data that is typically unified to show relationships, like your address and contact information, while a graph database is better for managing and visualizing deeply connected relationships, like a person and their various groups of friends, family, and co-workers. While it may be possible to build the latter with a relational database, a graph database is best suited to that problem. From there, the goal is to minimize the operational management of the selected tools and leverage AWS building blocks as much as possible. Finally, companies are managing and maintaining modern applications through automation wherever possible. This makes modern applications lightweight, reliable, scalable, and secure, which allows their owners to deliver value more quickly and frequently.

Q: What’s unique or better about building modern applications on AWS?

AWS has a breadth of services and features that enable companies to build an inspiring variety of applications. Today, AWS offers a growing set of serverless capabilities like AWS Lambda, AWS Step Functions, Amazon API Gateway, and AWS Fargate that allow developers to offload undifferentiated heavy lifting. The ever-increasing and maturing suite of AWS Developer Tools enables DevOps practices through the automation of infrastructure and deployments, and AWS Cloud9 provides a simple environment for serverless development. From build, to commit, to release, AWS offers a full suite of tools to enable continuous delivery with elasticity and without the operational concerns of historical offerings. Finally, AWS was one of the earliest adopters of microservices architectures, and we have built a deep reservoir of expertise and tooling to enable organizations to build and manage a microservices architecture.

Architecture

Q: Where do you draw the boundaries of a microservice?

The most important factors to consider when drawing boundaries around your microservice are scope and dependencies. The goal should be to make the service small enough in scope that a single team (typically 5-10 people) can own, manage, scale, and deploy the service independently from other services or other teams. Boundaries are typically drawn around the context of a capability and fall within a single data or business domain. The size of the service is not as important as the ownership and autonomy of the team that owns it. For example, a payment microservice that provides the ability to send or request a payment will be made up of storage, integration, and compute components that are bound by a service interface, normally referred to as an API. A single team could effectively own and manage this service.
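
To make the boundary concrete, here is a minimal sketch of what that payment service's interface might look like as a single AWS Lambda function behind an Amazon API Gateway HTTP API. The route names and payload fields are hypothetical; the point is that everything behind this interface (storage, integration, compute) belongs to one team.

```python
import json

def handler(event, context):
    """Route API Gateway HTTP API (payload v2.0) requests to payment operations."""
    route = event.get("routeKey", "")
    body = json.loads(event.get("body") or "{}")

    if route == "POST /payments/send":
        # The service records the payment in its own storage (not shown here).
        result = {"status": "accepted", "operation": "send", "amount": body.get("amount")}
    elif route == "POST /payments/request":
        result = {"status": "accepted", "operation": "request", "amount": body.get("amount")}
    else:
        return {"statusCode": 404, "body": json.dumps({"error": "unknown route"})}

    return {"statusCode": 202, "body": json.dumps(result)}
```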

Q: Should I migrate to the cloud or break the monolith first?

Enterprises tend to begin their journey thinking they should refactor before migrating to the cloud. While there are cases that force refactoring before migrating, we recommend finding the fastest path to migrate with minimal work – minimum viable refactoring. The reality is that the final architecture and makeup of your application will be very different if refactoring is done in the cloud versus on-premises. For example, you can refactor differently in the cloud by using AWS building blocks. You can also benefit from the on-demand availability of infrastructure in the cloud, which allows for safer refactoring and testing with reduced investment.

Q: How can modern architectures enable improved security?

Not only can microservices provide smaller systems that can be owned and operated by small, autonomous teams, but the technologies used to build them also provide new opportunities for automation, scaling, and security.

AWS customer Travelex is a perfect example of how modern architecture can improve security. Travelex is trusted globally for currency exchange, with a presence in 130 countries. They wanted to move away from their monolithic, on-premises data center model to release faster than their current pace (eight times per year) and improve security. Although they had decades of experience complying with financial regulations, seeking approval for a cloud workload was new.

Their design, with improved security, allowed them to gain approval and deploy a microservices architecture using Docker and Amazon Elastic Container Service, as well as a security controls framework that includes AWS Key Management Service, Amazon VPC, and AWS WAF (Web Application Firewall). They created automated, auditable, and tamper-proof deployments. And, to reduce the blast radius of any compromise, they designed a process to destroy every container after 24 hours and redeploy with new security certificates, minimizing the effect of sensitive configurations being lost or stolen. They are now able to deploy hundreds of times per week, as opposed to eight times per year under the old low-frequency model, and the automation they implemented has improved their overall security posture.

Culture/Organization

Q: How do I structure my teams in order to enable ownership and autonomy?

At Amazon, we have two-pizza teams, which get their name from their size – small enough to be fed by two pizzas, typically 5-10 people. These teams have complete ownership and autonomy over their applications and all the skills necessary to deliver – handoffs, cross-team communication, and dependencies are minimized. More and more, organizations are adopting agile and DevOps practices along with their move to the cloud. While these practices and technologies can certainly provide value independently, to truly unlock your agility and maximize value delivery, organizations combine these practices with the concept of two-pizza teams to maximize speed and autonomy. At the end of the day, this sort of transformation requires different behaviors and different structures to maximize the effectiveness of the two-pizza teams; as a result, typical shared-services models tend to shrink in favor of teams that can own and operate what they build.

For example, Cox Automotive embarked on a transformation of their people, process, and technology by implementing the Scaled Agile Framework (SAFe) across the entire enterprise, going all-in with AWS, and evolving to a “You build it, you run it” environment. Cox Automotive built out small teams that used Scrum, Kanban, or other agile methodologies; teams were typically 8-10 people in size to drive ownership and autonomy, very similar to Amazon’s two-pizza teams. Implementing a coordinated delivery, operations, and technology strategy allowed Cox Automotive to unify IT and the business by creating a team of product, engineering, architecture, and business leaders at the upper levels of SAFe, resulting in a more transparent and collaborative environment. This model connected priorities across the organization and gave teams context as they looked to solve customer problems.

Q: How do I retrain my teams?

Many companies construct a continuing education program that builds upon the individual’s and the organization’s ever-changing maturity. When you're first adopting the cloud, we recommend foundational training that focuses on your secure landing zone, cost management, and cloud services and concepts like serverless. From there, you should leverage AWS Well-Architected reviews to encourage learning and architectural evolution throughout your organization. Once your cloud foundation is established, you can begin to tackle your software development lifecycle and automation, moving towards an agile approach that embraces DevSecOps. Training can vary based on your starting point.

Q: What are the foundational elements necessary to shift from a transitional architecture and organizational structure to a more strategic position?

Prior to executing on a strategic position, we tend to see organizations focusing on leadership engagement, buy-in, and change. Transformations tend to fail or succeed as a direct result of leadership behavior and support. Many organizations dabble in cloud, agile development, and DevOps practices before embarking on a more strategic transformative effort. Training is a key aspect of adopting new technology or practices, but ultimately, changing behaviors and moving from gates to guardrails will increase business agility. Consider today’s highways. They have marked lanes, on-ramps and exits, passing lanes, and speed limits. Drivers are free to enter and exit as they see fit, day or night, and ultimately travel much faster than they otherwise could. This is the type of environment we aim to create for our teams: a place where the rules are known and consistently enforced in a way that allows for speed and flexibility. Having a firm grasp on the activities and beliefs necessary to make that a reality is a critical starting point for leaders.

Q: What are the cultural shifts that have to happen before we can meaningfully adopt a microservices architecture?

Ideally, organizations would begin by changing their organizational structure and delivery approach to consist of small teams with autonomy and ownership over products or services. While you can adopt microservices without changing your culture or organizational structure, gaining service adopters and preventing duplication may be tricky. Without a cohesive strategy or delivery model, results will vary. There isn't one right way to approach these changes, but there are certainly many wrong ways. Start by focusing on what you are trying to achieve: what is the problem you are trying to solve, and how will you know if you have successfully solved it? Start small from there and keep iterating.

Software Delivery

Q: How do I start introducing CI/CD for my team?

As you progress in your modernization journey, using continuous integration (CI) and continuous deployment (CD) tools helps to minimize the risk associated with rapid change. In practice, most organizations begin with continuous integration and may still manually deploy to production for their first few applications, leaving a final human check in place to gate the release process. You should start thinking about CI when you have a handful of services communicating with each other, or a complex integration into your existing applications that requires specialized communication. As your practices mature and automation increases, organizations typically become more confident in their releases and more comfortable with automated safety checks. This is typically when a transition can be made to CI/CD, removing the manual steps.
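
As an illustration, here is a minimal sketch of such a pipeline using CDK Pipelines (AWS CDK v2 for Python), with a manual approval step gating production. The repository, connection ARN, and stage contents are hypothetical placeholders; once automated checks have earned your confidence, removing the manual approval step makes the pipeline fully continuous.

```python
from aws_cdk import App, Stack, Stage, pipelines
from constructs import Construct

class AppStage(Stage):
    """The deployable application; its stacks would be defined here."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

class PipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        pipeline = pipelines.CodePipeline(
            self, "Pipeline",
            synth=pipelines.ShellStep(
                "Synth",
                input=pipelines.CodePipelineSource.connection(
                    "my-org/my-repo", "main",                           # hypothetical repo
                    connection_arn="arn:aws:codestar-connections:...",  # hypothetical ARN
                ),
                commands=["pip install -r requirements.txt", "npx cdk synth"],
            ),
        )
        # The human gate: remove this pre-deployment step to move to full CI/CD.
        pipeline.add_stage(AppStage(self, "Prod"),
                           pre=[pipelines.ManualApprovalStep("PromoteToProd")])

app = App()
PipelineStack(app, "PipelineStack")
app.synth()
```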

Q: What are the keys to speeding up value delivery?

The simplest way to increase speed is to do less. Organizations that become disciplined in choosing what to work on, relentlessly learning and revising that prioritization, and focusing their time, energy, and money on the most valuable activities will excel in speeding up value delivery. There are several tools, techniques, and technologies to help, starting with creating continuous delivery pipelines that automate your infrastructure, security, deployment, testing, and rollback. This helps increase deployment consistency and reduce mean time to repair and change failure rates while increasing deployment frequency. In the cloud, you can offload operations and maintenance while selecting the right tool for the job. This helps you focus on value while avoiding square-peg, round-hole situations. You can choose fully managed services and serverless offerings, like AWS Lambda, AWS Fargate, or Amazon DynamoDB, to reduce operations and maintenance. By focusing on the right tool for the job and minimizing undifferentiated work, you can spend more time building solutions that drive value for your business and your customers. In addition, by simplifying your solutions, you can decrease the cost of change and increase the speed at which you pivot and/or iterate.

Operational model

Q: How do I decide when to use Lambda and/or containers?

More and more customers are choosing AWS Lambda and containers to build modern applications with maximum agility. Your choice depends largely on the complexity of your workload, typical task runtime, and invocation pattern. Containers are the most popular option for packaging code and a great tool for modernizing legacy applications, because they offer excellent portability and flexibility over application settings. With Lambda, you get the most simplicity: the only code you write is business logic.

Containers provide a standard way to package code, configurations, and dependencies into a single object so you can run it anywhere, scale quickly, and set CPU and memory utilization granularly. In fact, the majority of container workloads run on AWS. To run containers, you need compute, which can be Amazon EC2 or AWS Fargate; Fargate enables you to run containers without managing servers. You also need an orchestration service, such as Amazon ECS or Amazon EKS.

Lambda runs code in response to events or triggers, such as a new file added to Amazon S3 or a new entry in an Amazon DynamoDB table, or directly through calls to its APIs. Lambda supports 115 event triggers, more than any other provider in the market. Customers just upload their code, and Lambda takes care of everything required to run and scale the code with high availability. They only pay for the compute time they consume - there is no charge when code is not running. Customers can run code for virtually any type of application or backend service - all without provisioning or managing servers.
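
For example, a minimal sketch of an S3-triggered Lambda handler might look like the following; the processing logic is hypothetical, and Lambda takes care of provisioning and scaling the execution environment.

```python
import urllib.parse

def handler(event, context):
    """Process each S3 object-created record in the triggering event."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Business logic only: no servers to provision, patch, or scale.
        print(f"New object: s3://{bucket}/{key}")
    return {"processed": len(records)}
```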

Q: Which container orchestration service should I choose?

This depends on your current expertise and preference for operational ease or control over application settings. To minimize operations, we suggest AWS Fargate, the only compute engine that allows you to run your containers without having to manage servers. With Fargate, you build your container image, define how and where you want it to run, and pay for the resources you use. This eliminates the need to choose the right instance type; secure, patch, and upgrade instances; or scale your cluster.
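
For illustration, here is a hedged sketch of launching a container on Fargate with boto3; the cluster, task definition, and subnet are hypothetical placeholders. Notice that no instance type appears anywhere in the request.

```python
import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="my-cluster",            # hypothetical cluster name
    launchType="FARGATE",
    taskDefinition="my-task:1",      # hypothetical; CPU and memory are set here
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # hypothetical subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```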

Amazon ECS is the best place to run your containers if you’re familiar with AWS constructs and intend to primarily use AWS tools and services as part of your container infrastructure. Because we built ECS from the ground up and have complete control of its roadmap, we are able to quickly and natively integrate with AWS services such as IAM, CloudWatch, Auto Scaling, Secrets Manager, and Elastic Load Balancing, while providing you a familiar experience to deploy and scale your containers.

If you prefer to run Kubernetes, then we recommend Amazon EKS, the most reliable way to run Kubernetes in the cloud. EKS is reliable because it runs across multiple AWS Availability Zones, giving it built-in redundancy and resiliency. We also make sure our EKS customers have the latest security patches: we take fixes out of the most recent version and apply them to older versions to prevent reliability and availability issues with your EKS applications.

Q: When do I choose serverless technologies over managing it myself?

Unless managing infrastructure and operations is a core competency of your business, we suggest offloading that work to focus on innovation that benefits your customers. Many of our customers have taken a serverless-first approach, using serverless technologies unless there is a compelling reason not to. Serverless technologies allow your developers to focus on your business logic, removing the complexity of managing the underlying infrastructure. If you have a good understanding of the capabilities that make up your application, serverless allows you to decompose it into separate components and lets your developers focus on the outcome.

Q: Will using serverless technologies affect my multi-cloud strategy?

The decision to use serverless technologies is part of the overall decision to select a cloud provider, and doesn’t tend to have an incremental impact on multi-cloud strategy. As companies plan their cloud strategy, they often start with the belief that they're going to split their workloads relatively evenly among two or three providers. But when they get into the practicality of assessing it, very few end up going that route; most predominantly pick one provider, for several reasons. For one, running on multiple clouds is difficult – it forces development teams to be fluent in multiple cloud platforms. It also forces teams to standardize on the lowest-common-denominator platform rather than taking advantage of the broader features of a specific provider. Finally, it spreads workloads amongst cloud providers, reducing your buying power and ability to get volume discounts. So, the vast majority of customers predominantly pick a single infrastructure or cloud provider.

Security

Q: How does security improve with modern applications?

A key component of modern application design is incorporating security “early and often” into the development process. Security, risk, and compliance factors are considered at every stage: ideation, design, development, testing, and launch. Automation of key security controls, as well as security triggers in the CI/CD pipeline, not only accounts for security in the development process but also reduces the need for manual intervention, whether for testing or remediation.

Q: How do security practices and options change with the cloud and modern applications?

In modern applications, security is code, just as infrastructure is code. The ability to automate the inclusion of security controls and the monitoring of infrastructure is a game changer for many organizations, allowing for the design and build of self-healing infrastructures. Additionally, with the use of serverless technologies, the attack surface of applications is greatly reduced, since any potentially vulnerable code is only running when needed.
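
As one illustration of a self-healing control, the following hedged sketch assumes an AWS Config rule routes non-compliance events to a Lambda function, which re-enables default encryption on the offending S3 bucket. The event shape shown is simplified.

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Remediate an unencrypted bucket reported by a (hypothetical) Config rule."""
    bucket = event["detail"]["resourceId"]  # assumed event shape
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
        },
    )
    return {"remediated": bucket}
```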

For example, Verizon was migrating an additional 1,000 business-critical applications and databases to AWS and needed a way to scale their security validation processes. They needed to put direct access to cloud resources in the hands of thousands of developers, not just infrastructure teams. Verizon also knew they needed to unleash their developers and allow them to focus more time on delivering new value for their customers, not waiting on hardware and security checks. So, they developed an automated security validation process that allowed teams to develop new applications on their own timeline. In stage one, the process analyzes basic configuration rules before templates are used to deploy, checking for encryption, logging, and access-rights configuration. In stage two, it deploys the validated template into a test environment for live validation and uses AWS Config to audit deployed system configurations. Finally, in stage three, it runs a third-party vulnerability assessment to check for missing OS patches or other vulnerabilities. Once complete, it digitally signs the results, and the automated deployment pipeline checks whether the application is approved for production, deploying only those that have passed this validation process.
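
In the spirit of stage one above, here is a hedged sketch of a pre-deployment check that statically scans a CloudFormation template (JSON) for basic controls such as S3 encryption and access logging. The rules are illustrative, not Verizon's actual checks.

```python
import json
import sys

def check_template(path):
    """Return a list of rule failures found in the template at `path`."""
    with open(path) as f:
        template = json.load(f)
    failures = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") == "AWS::S3::Bucket":
            props = resource.get("Properties", {})
            if "BucketEncryption" not in props:
                failures.append(f"{name}: missing BucketEncryption")
            if "LoggingConfiguration" not in props:
                failures.append(f"{name}: missing LoggingConfiguration")
    return failures

if __name__ == "__main__":
    problems = check_template(sys.argv[1])
    for problem in problems:
        print("FAIL:", problem)
    sys.exit(1 if problems else 0)  # a non-zero exit blocks the pipeline stage
```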

Verizon’s automated security validation process is an example of how customers are using automation and the AWS cloud to increase business agility by creating autonomy within their teams, allowing them to safely move quickly. 

Q: What am I actually securing in a serverless environment? Do my tools and processes change?

Your focus is twofold: (1) securing the application code that is developed and running, in accordance with best practices (e.g., the OWASP Top 10); and (2) securing the infrastructure you control, aligning with cloud best practices focused on identity, detective controls, infrastructure security, data protection, and incident response.
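
As a small illustration of the second focus area, the following sketch uses boto3 to attach a least-privilege inline policy, scoped to a single DynamoDB table, to one function's role. The role, table, and policy names are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
        # Hypothetical table ARN: only this table, only these two actions.
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    }],
}

iam.put_role_policy(
    RoleName="orders-function-role",           # hypothetical role
    PolicyName="orders-table-least-privilege",
    PolicyDocument=json.dumps(policy),
)
```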

Q: How do I implement my existing security policies in this new world?

Many companies succeed by focusing on their security outcomes or control objectives and ensuring those are met, rather than equating security with a specific product or service. This typically involves rewriting security policies to be broader and more focused on security and compliance objectives. At the runbook level, you then document how the platform’s controls meet or exceed the security objectives described in the relevant policy.

Data management

Q: How do I evaluate which database is the best fit for purpose?

Customers tell us they want to build scalable, high-performing, and functional applications that meet specific performance and business requirements. When choosing a database for an application, customers should take into account these requirements as well as the data model and data access patterns.
In order to meet these diverse customer needs, we offer a host of purpose-built database services:

Amazon RDS for fully managed relational databases, and Amazon Aurora for commercial-grade relational databases with an ever-improving feature set that includes Amazon Aurora Serverless
Amazon DynamoDB, a key-value and document database that delivers single-digit-millisecond performance at any scale
Amazon Neptune for graph databases
Amazon DocumentDB, a fully managed document database that supports MongoDB workloads
Amazon Timestream, a time-series database service for Internet of Things (IoT) and operational applications
Amazon Quantum Ledger Database (QLDB), a purpose-built ledger database
Amazon Aurora Global Database, which spans multiple AWS Regions while replicating writes with a typical latency of less than one second
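
As a small illustration of fit for purpose, the key-value access pattern that DynamoDB is built for looks like the following sketch; the table and attributes are hypothetical.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("customers")  # hypothetical table keyed on "customer_id"

# One write and one read by primary key - the access pattern DynamoDB
# serves with single-digit-millisecond latency at any scale.
table.put_item(Item={"customer_id": "c-1001", "name": "Ana", "tier": "gold"})
item = table.get_item(Key={"customer_id": "c-1001"})["Item"]
print(item["name"], item["tier"])
```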

Q: I have a legacy database and a long term licensing agreement. How do I get started on the process of migrating to a more modern database?

Database Freedom is a unique program designed to assist qualifying customers migrating from traditional database engines to cloud-native ones on AWS. Database Freedom supports migrations to Amazon Aurora (a MySQL- and PostgreSQL-compatible relational database built for the cloud), Amazon RDS for PostgreSQL, MySQL, and MariaDB, Amazon Redshift, Amazon DynamoDB, Amazon EMR, Amazon Kinesis, Amazon Neptune, Amazon QLDB, Amazon Timestream, and Amazon DocumentDB. Additionally, the AWS Schema Conversion Tool and AWS Database Migration Service can help customers migrate their databases to these services quickly and securely.
We offer qualifying customers advice on application architecture, migration strategies, program management, and employee training customized for their technology landscape and migration goals. We also support proofs of concept to demonstrate the feasibility of a migration.
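
For example, kicking off a migration with AWS Database Migration Service via boto3 might look like the following hedged sketch, using a full-load-plus-CDC task. All ARNs are hypothetical placeholders, and any schema conversion with the AWS Schema Conversion Tool would happen beforehand.

```python
import json
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="legacy-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",   # hypothetical
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",   # hypothetical
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST",  # hypothetical
    MigrationType="full-load-and-cdc",  # initial copy, then ongoing replication
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection", "rule-id": "1", "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```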

We also assist qualifying customers in migrating to AWS through our AWS Professional Services team and our network of Database Freedom Partners. These teams and organizations specialize in a range of database technologies and bring a wealth of experience acquired by migrating thousands of databases, applications, and data warehouses to AWS. We also offer service credits to qualifying customers to minimize the financial impact of the migration.

We have helped customers such as 3M, Verizon, Capital One, Intuit, Ryanair and Amazon.com achieve database freedom.

Find out more about Database Freedom and contact us here.