Category: SaaS on AWS

Testing SaaS Solutions on AWS

by Tod Golding, in AWS Partner Solutions Architect (SA) Guest Post, How-to Guide, SaaS on AWS

Tod Golding is a Partner Solutions Architect (SA) at AWS. He is focused on SaaS. 

The move to a software as a service (SaaS) delivery model is often motivated by a fundamental need for greater agility and customer responsiveness. SaaS providers often succeed and thrive based on their ability to rapidly release new features without compromising the stability of their solutions. Achieving this level of agility starts with a commitment to building a robust DevOps pipeline that includes a rich collection of automated tests. For SaaS providers, these automated tests are at the core of their ability to effectively assess the complex dimensions of multi-tenant load, performance, and security.

In this blog post, we’ll highlight the areas where SaaS can influence your approach to testing on AWS. In some cases, SaaS will simply extend your existing testing models (load, performance, and so on). In other cases, the multi-tenant nature of SaaS will introduce new considerations that will require new types of tests that exercise the SaaS-specific dimensions of your solution. The sections that follow examine each of these areas and provide insights into how expanding the scope of your tests can add value to SaaS environments.

SaaS Load/Performance Testing

In a multi-tenant universe, your tests go beyond simply ensuring that your system is healthy; they must also ensure that your system can respond effectively to the unexpected variations in tenant activity that are common in SaaS systems. Your tests must verify that your application’s scaling policies can keep up with the continually changing peaks and valleys of resource consumption in SaaS environments. The reality is that the unpredictability of SaaS loads, combined with the potential for cross-tenant performance degradation, sets the bar for SaaS load and performance testing much higher. Customers will certainly be unhappy if their system’s performance is periodically affected by the activities of other tenants.

For SaaS, then, the scope of testing reaches beyond performance. It’s about building a suite of tests that can effectively model and evaluate how your system will respond to the expected and the unexpected. In addition to ensuring that customers have a positive experience, your tests must also consider how cost-efficiently your system achieves scale. If you are over-allocating resources in response to activity, you’re likely hurting the bottom line of the business.

The following diagram is an idealized representation of how SaaS organizations prefer to model the connection between load and resource consumption. Here, you see actual tenant consumption in blue and allocated resources in red. In this model, you’ll notice that the application’s resources are allocated and deallocated in lockstep with tenant activity. This is every SaaS architect’s dream: each tenant has a positive experience without over-committing any resources.

The patterns in this chart represent a snapshot of time on a given day. Tomorrow’s view of this same snapshot could look very different. New tenants may have signed up that are pushing the load in entirely new ways. This means your tests must consider the spectrum of load profiles to verify that changes in tenant makeup and application usage won’t somehow break your scaling policies.

Given this consumption goal and the variability of tenant activity, you’ll need to think about how your tests can evaluate your system’s ability to meet these objectives. The following list identifies some specific areas where you might augment your load and performance testing strategy in a SaaS environment:

  • Cross-tenant impact tests – Create tests that simulate scenarios where a subset of your tenants place a disproportionate load on your system. The goal here is to determine how the system responds when load is not distributed evenly among tenants, and assess how this may affect overall tenant experience. If your system is decomposed into separately scalable services, you’ll want to create tests that validate the scaling policies for each service to ensure that they’re scaling on the right criteria.
  • Tenant consumption tests – Create a range of load profiles (e.g., flat, spiky, random) that track both resource and tenant activity metrics, and determine the delta between consumption and tenant activity. You can ultimately use this delta as part of a monitoring policy that could identify suboptimal resource consumption. You can also use this data with other testing data to see if you’ve sized your instances correctly, have IOPS configured correctly, and are optimizing your AWS footprint.
  • Tenant workflow tests – Use these tests to assess how the different workflows of your SaaS application respond to load in a multi-tenant context. The idea is to pick well-known workflows of your solution, and concentrate load on those workflows with multiple tenants to determine if these workflows create bottlenecks or over-allocation of resources in a multi-tenant setting.
  • Tenant onboarding tests – As tenants sign up for your system, you want to be sure they have a positive experience and that your onboarding flow is resilient, scalable, and efficient. This is especially true if your SaaS solution provisions infrastructure during the onboarding process. You’ll want to verify that a spike in activity doesn’t overwhelm the onboarding process. This is also an area where you may have dependencies on third-party integrations (billing, for example). You’ll likely want to validate that these integrations can support their SLAs. In some cases, you may implement fallback strategies to handle potential outages of these integrations. In these cases, you’ll want to introduce tests that verify that these fault tolerance mechanisms are performing as expected.
  • API throttling tests – The idea of API throttling is not unique to SaaS solutions. In general, any API you publish should include the notion of throttling. With SaaS, you also need to consider how tenants at different tiers can impose load via your API. A tenant in a free tier, for example, may not be allowed to impose the same load as a tenant in the gold tier. The main goal here is to verify that the throttling policies associated with each tier are being successfully applied and enforced.
  • Data distribution tests – In most cases, SaaS tenant data will not be uniformly distributed. These variations in a tenant’s data profile can create an imbalance in your overall data footprint, and may affect both the performance and cost of your solution. To offset this dynamic, SaaS teams will typically introduce sharding policies that account for and manage these variations. Sharding policies are essential to the performance and cost profile of your solution, and, as such, they represent a prime candidate for testing. Data distribution tests allow you to verify that the sharding policies you’ve adopted will successfully distribute the different patterns of tenant data that your system may encounter. Having these tests in place early may help you avoid the high cost of migrating to a new partitioning model after you’ve already stored significant amounts of customer data.
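The consumption and cross-tenant impact tests above can be sketched in code. The following is a minimal, hypothetical sketch: the profile shapes, the 25-unit scaling step, and the idle-capacity check are illustrative assumptions, not AWS-prescribed values.

```python
import math
import random

def load_profile(shape, periods=24, peak=100):
    """Return a synthetic per-period load series for one tenant."""
    if shape == "flat":
        return [peak // 2] * periods
    if shape == "spiky":
        return [peak if p % 6 == 0 else peak // 10 for p in range(periods)]
    if shape == "random":
        return [random.randint(0, peak) for _ in range(periods)]
    raise ValueError(f"unknown shape: {shape}")

def allocation_for(load, step=25):
    """Model a step-based scaling policy: capacity rounds up to a full step."""
    return [max(step, step * math.ceil(units / step)) for units in load]

def efficiency_delta(load, allocated):
    """Fraction of allocated capacity left idle across the test window."""
    total = sum(allocated)
    return (total - sum(load)) / total

# Combine several tenant profiles and check for over-allocation.
tenants = [load_profile(s) for s in ("flat", "spiky", "spiky", "random")]
combined = [sum(t[p] for t in tenants) for p in range(24)]
delta = efficiency_delta(combined, allocation_for(combined))
assert 0 <= delta < 1  # flag grossly over-provisioned scaling policies
```

In a real suite, the synthetic profiles would drive an actual load generator against a test environment, and the delta would come from CloudWatch metrics rather than the model above.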

As you can see, this test list is focused on ensuring that your SaaS solution can handle load in a multi-tenant context. Load for SaaS is often unpredictable, and these tests often represent your best opportunity to uncover key load and performance issues before they impact one or all of your tenants. In some cases, these tests may also surface new inflection points that merit inclusion in the operational view of your system.

Tenant Isolation Testing

SaaS customers expect that every measure will be taken to ensure that their environments are secure and inaccessible to other tenants. To support this requirement, SaaS providers build in a number of policies and mechanisms to secure each tenant’s data and infrastructure. Introducing tests that continually validate the enforcement of these policies is essential for any SaaS provider.

Naturally, your isolation testing strategy will be shaped heavily by how you’ve partitioned your tenant infrastructure. Some SaaS environments run each tenant in their own isolated infrastructure while others run in a fully shared model. The mechanisms and strategies you use to validate your tenant isolation will vary based on the model you’ve adopted.

The introduction of IAM policies provides an added layer of security for your SaaS solution. At the same time, it can add complexity to your testing model. It’s often difficult to find natural mechanisms to validate that your policies are performing as expected. This is typically addressed by introducing test scripts and API calls that attempt to access tenant resources, with specific emphasis on simulating attempts to cross tenant boundaries.

The following diagram provides one example of this model in action. It depicts a set of resources (Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon DynamoDB items, and Amazon Simple Storage Service (Amazon S3) buckets) that belong to two tenants. To enforce isolation of these tenant resources, this solution introduces separate IAM policies that will scope and limit access to each resource.

With these policies in place, your tests must now validate the policies. Imagine, for example, that a new feature introduces a dependency on a new AWS resource. When introducing this new resource, the team happens to overlook the need to create the corresponding IAM policies to prevent cross-tenant access to that resource. Now, with good tests in place, you should be able to detect this violation. Without these tests, you have no way of knowing that your tenant isolation model is being accurately applied.
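A cross-tenant access test of this kind can be sketched as follows. In practice the `s3` argument would be a boto3 client built from Tenant A’s assumed IAM role; here a minimal stub simulates the IAM deny so the check itself runs anywhere, and the bucket and prefix names are hypothetical.

```python
class AccessDenied(Exception):
    """Stands in for botocore's ClientError with Code=AccessDenied."""

class StubTenantScopedS3:
    """Simulates an S3 client whose role is scoped to one tenant prefix."""
    def __init__(self, allowed_prefix):
        self.allowed_prefix = allowed_prefix

    def get_object(self, Bucket, Key):
        if not Key.startswith(self.allowed_prefix):
            raise AccessDenied(Key)
        return {"Body": b"{}"}

def cross_tenant_read_blocked(s3, bucket, other_prefix):
    """Try to read another tenant's object; True means isolation held."""
    try:
        s3.get_object(Bucket=bucket, Key=f"{other_prefix}/data.json")
    except AccessDenied:
        return True
    return False  # the read succeeded: the isolation policy has a gap

tenant_a = StubTenantScopedS3(allowed_prefix="tenant-a/")
assert cross_tenant_read_blocked(tenant_a, "saas-tenant-data", "tenant-b")
assert not cross_tenant_read_blocked(tenant_a, "saas-tenant-data", "tenant-a")
```

The same pattern extends to DynamoDB items and EC2 instances: assume each tenant’s role, attempt access to every other tenant’s resources, and fail the build if any attempt succeeds.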

As part of isolation testing, you may also want to introduce tests that validate the scope and access of specific application and management roles. For example, SaaS providers often have separate management consoles that have varying levels of access to tenant data. You’ll want to be sure to use tests that verify that the access levels of these roles match the scoping policies for each role.

Tenant Lifecycle Testing

The management of SaaS tenants requires you to consider the full lifecycle of events that may be part of a tenant’s experience. The following diagram provides a sampling of events that are often part of the overall tenant lifecycle.

The left side of this diagram shows the actions that tenants might take, and the right side shows some of the operations that a SaaS provider’s account management team might perform in response to those tenant actions.

The tests you would introduce here would validate that the system is correctly applying the policies of the new state as tenants go through each transition. If, for example, a tenant account is suspended or deactivated, you may have policies that determine how long data is retained for that tenant. These policies may also vary based on the tier of the tenant. Your tests would need to verify that these policies are working as expected.

A tenant’s ability to change tiers also represents a good candidate for testing, because a change in tiers would also change a tenant’s ability to access features or additional resources. You’ll also want to consider the user experience for tier changes. Does the tenant need to log out and start a new session before their tier change is recognized? All of these policies represent areas that should be covered by your tier tests.
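The lifecycle transitions and retention policies above can be modeled directly in a test. The tier names, retention windows, and transition table below are illustrative assumptions standing in for whatever policies your solution actually defines.

```python
# Illustrative tier-based retention policy (days of data kept after exit).
RETENTION_DAYS = {"free": 30, "gold": 90}

# Legal lifecycle transitions: (current state, event) -> next state.
TRANSITIONS = {
    ("active", "suspend"): "suspended",
    ("suspended", "reactivate"): "active",
    ("suspended", "deactivate"): "deactivated",
    ("active", "deactivate"): "deactivated",
}

def apply_event(state, event):
    """Advance a tenant through the lifecycle; reject illegal moves."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} from {state}")

def retention_for(tier, state):
    """Days of data retention owed to a tenant in this state, or None."""
    if state in ("suspended", "deactivated"):
        return RETENTION_DAYS[tier]
    return None

# Lifecycle tests: policies must follow the tenant through each transition.
state = apply_event("active", "suspend")
assert state == "suspended"
assert retention_for("gold", state) == 90
assert retention_for("free", apply_event(state, "deactivate")) == 30
```

A real test would drive these transitions through your tenant-management API and then verify the system’s observable behavior (data still present, sessions invalidated, features gated) rather than a local table.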

Tier Boundary Testing

SaaS solutions are typically offered in a tier-based model where SaaS providers may limit access to features, the number of users, the size of data, and so on based on the plan a tenant has selected. The system will then meter consumption and apply policies to control the experience of each tenant.

This tiering scheme is a good candidate for testing in SaaS environments. SaaS teams should create tests that validate that the boundaries of each tier are being enforced. This typically requires simulating configuration and consumption patterns that will exceed the boundary of a tier and validating that the policies associated with that boundary are correctly triggered. The policies could include everything from limiting access to sending notifications.
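A boundary test of this shape can be sketched as below. The tier limits and the policy names ("notify", "throttle") are illustrative assumptions; the point is to drive simulated consumption across the boundary and assert that the right policies fire.

```python
# Illustrative per-tier consumption limits (e.g., API calls per month).
TIER_LIMITS = {"free": 1_000, "standard": 50_000, "gold": 1_000_000}

def check_boundary(tier, consumed_units):
    """Return the policy actions triggered by this consumption level."""
    limit = TIER_LIMITS[tier]
    actions = []
    if consumed_units >= 0.8 * limit:
        actions.append("notify")    # warn the tenant as they near the cap
    if consumed_units > limit:
        actions.append("throttle")  # enforce the boundary itself
    return actions

# Boundary tests: consumption below, near, and past the free-tier limit.
assert check_boundary("free", 500) == []
assert check_boundary("free", 900) == ["notify"]
assert check_boundary("free", 1_500) == ["notify", "throttle"]
```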

Fault Tolerance Testing

Fault tolerance is a general area of concern for all solutions. It’s also an area that is addressed in depth by the industry with solid guidance, frameworks, and tools. The bar for fault tolerance in SaaS applications is very high. If your customers are running on shared infrastructure and that environment is plagued by availability problems, these problems will be visible to your entire population of customers. Naturally, this can directly impact your success as a SaaS provider.

It’s beyond the scope of this blog post to dig into the various strategies for achieving better fault tolerance, but we recommend that you add this to the list of testing areas for your SaaS environment. SaaS providers should invest heavily in adopting strategies that can limit or control the scope of outages and introduce tests that validate that these mechanisms are performing as expected.

Using Cloud Constructs

Much of the testing that we’ve outlined here is made simpler and more cost effective on AWS. With AWS, you can easily spin up environments and simulate loads against those environments. This allows you to introduce tests that mimic the various flavors of load and performance you can expect in your SaaS environments. Then, when you’re done, you can tear these environments down just as quickly as you created them.

Testing with a Multi-Tenant Mindset

SaaS multi-tenancy brings with it a new set of load, performance, isolation, and agility considerations—each of which adds new dimensions to your testing mindset. This blog post provided a sampling of considerations that might shape your approach to testing in a SaaS environment. Fortunately, testing SaaS solutions is a continually evolving area with a rich collection of AWS and partner tools. These tools can support your efforts to build a robust testing strategy that enhances the experience of your customers while still allowing you to optimize the consumption of your solution.

Generating Custom AWS CloudFormation Templates with Lambda to Create Cross-Account Roles

by Ian Scofield, in AWS CloudFormation, AWS Lambda, How-to Guide, SaaS on AWS, Security

Ian Scofield is a Partner Solutions Architect (SA) at AWS. 

In a previous post in our series, we showed how to use an AWS CloudFormation launch stack URL to help customers create a cross-account role in their AWS account. As mentioned in an earlier APN Blog post, a cross-account role is the recommended method to enable access to a customer account for partner integrations, and creating the role using a CloudFormation template instead of following a long series of manual steps can reduce failure rates and improve the customer onboarding experience.

In this post, we will explore the use of custom CloudFormation templates to further streamline the onboarding process.

Recall that the CloudFormation template in our previous example was static and required the customer to enter a 12-digit AWS account ID and an arcane value called an external ID. Of course, omitting or entering incorrect values results in a failed CloudFormation launch, or, worse, a useless cross-account role sitting in the customer account.

Since we already know the values of these two parameters (the partner’s AWS account ID is the parameter we want the customer to trust, and the external ID is a unique value we generate for each customer), it makes sense for us to automate template creation and set these values ahead of time on behalf of the customer.

About external IDs

The external ID is a piece of data defined in the trust policy that the partner must include when assuming a role.  This allows the role to be assumed only when the correct value is passed, which specifically addresses the confused deputy problem.  External IDs are a good way for APN Partners to improve the security of cross-account role handling in a SaaS solution, and should be used by APN Partners who are implementing products that use cross-account roles.  For a deeper look into why external IDs are important and why APN Partners should use them, take a look at How to Use External ID When Granting Access to Your AWS Resources on the AWS Security Blog.
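To make the mechanics concrete, here is a minimal sketch of how the external ID flows through the trust relationship. The account IDs and external ID below are hypothetical placeholders; the partner embeds its account ID and the per-customer external ID in the trust policy the customer creates, then must echo the same external ID in its `sts:AssumeRole` call.

```python
import json

def trust_policy(partner_account_id, external_id):
    """Build the trust policy document carried by the customer's role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Principal": {"AWS": f"arn:aws:iam::{partner_account_id}:root"},
            "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
        }],
    }

policy = trust_policy("123456789012", "c0ffee-unique-per-customer")
print(json.dumps(policy, indent=2))

# On the partner side, the same external ID must accompany the call, e.g.:
#   boto3.client("sts").assume_role(
#       RoleArn="arn:aws:iam::999999999999:role/PartnerAccessRole",
#       RoleSessionName="partner-integration",
#       ExternalId="c0ffee-unique-per-customer",
#   )
```

If the `ExternalId` parameter is omitted or wrong, the `StringEquals` condition fails and STS refuses the role, which is exactly the confused-deputy protection described above.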

There are many methods for setting default values in CloudFormation templates. We’ll discuss two of these. Keep in mind that although this blog post focuses on cross-account role creation, the method of populating parameters on the fly can be used for any other components within the template.  Depending on the parameter in question, one of the methods we discuss might be a better fit than the other.

The first method is to supply the partner’s account ID and external ID as the default values to CloudFormation parameters. The customer can inspect and potentially overwrite parameter values in the CloudFormation console before launching the template (Figure 1).  In some cases, this level of transparency might be required so the customer is aware of the AWS Account ID they are granting access to.

However, as noted previously, incorrect values will result in the CloudFormation stack failing to launch or associate correctly, so any customer modifications to these parameters are likely to result in a failure.

Figure 1: Using default parameter values

The second method (Figure 2) doesn’t expose any parameters to the customer; instead, it hard-codes the partner’s account ID and external ID directly into the resources in the template. This helps ensure the successful creation of the role and association with the partner account, while removing any additional work for the customer.

Figure 2: Hardcoding parameter values

In both of these scenarios, how do you insert values that are unique for each customer into the template? In order for either method to work, you have to create a custom template for each customer with their unique values. This requires some additional steps in your onboarding workflow; however, the simplicity it provides to the customer and reduced chances of failure can outweigh the initial setup on your side.

To demonstrate this scenario, I created a mock portal to handle the customer onboarding experience:

Figure 3: Mock portal for onboarding

The portal requires the customer to provide their AWS account ID to associate with the uniquely generated external ID. When the user clicks Generate Custom Template, the account ID is sent to your application and invokes an AWS Lambda function. In my example, I’m using Amazon API Gateway to invoke the function, which does the following:

1. Puts an entry with the account ID into an Amazon DynamoDB table. This allows you to track customers and associate the cross-account role we’ll create later with the AWS account ID. You can also store the external ID and any other information pertaining to the customer in the DynamoDB table.

2. Generates a customized template for the user from a master template. The master template has all the necessary information and some placeholder values that you substitute with customized values:

       AssumeRolePolicyDocument:
         Version: '2012-10-17'
         Statement:
           - Sid: ''
             Effect: Allow
             Action: 'sts:AssumeRole'
             Principal:
               AWS: <TRUSTED_ACCOUNT>
             Condition:
               StringEquals:
                 sts:ExternalId: <EXTERNAL_ID>
       Path: "/"

The Lambda function downloads the template and uses a simple replace() function to replace the placeholder strings with the unique values you’ve generated for this customer.

3. Uploads the customized template to an S3 bucket with the customer’s account ID prepended to the file name to correlate templates with specific customers.

4. Sends back the S3 URL for the custom-generated template, and then displays a Launch Stack button on the portal for the customer to begin the onboarding process.
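The substitution step in the Lambda function can be sketched as follows. The partner account ID, table name, and bucket are hypothetical, and the boto3 calls to DynamoDB and S3 are indicated in comments so the core logic stays runnable anywhere.

```python
PARTNER_ACCOUNT_ID = "123456789012"  # hypothetical partner account

def customize_template(master, partner_account_id, external_id):
    """Substitute per-customer values into the master template text."""
    return (master
            .replace("<TRUSTED_ACCOUNT>", partner_account_id)
            .replace("<EXTERNAL_ID>", external_id))

def template_key(customer_account_id):
    """S3 object key: customer account ID prepended to the file name."""
    return f"{customer_account_id}-cross-account-role.yaml"

# In the Lambda handler you would additionally (per steps 1 and 3 above):
#   boto3.resource("dynamodb").Table("Customers").put_item(Item=...)
#   boto3.client("s3").put_object(Bucket=..., Key=template_key(acct), ...)

master = ("Principal:\n  AWS: <TRUSTED_ACCOUNT>\n"
          "Condition:\n  StringEquals:\n    sts:ExternalId: <EXTERNAL_ID>\n")
rendered = customize_template(master, PARTNER_ACCOUNT_ID, "c0ffee-1234")
assert "<TRUSTED_ACCOUNT>" not in rendered
assert "<EXTERNAL_ID>" not in rendered
print(template_key("999999999999"))
```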

Figure 4: Launch UI

At this point, the customer clicks the Launch Stack button and begins the onboarding process for their AWS account.  The stack creates the cross-account role with the provided policy embedded in the template, without the customer having to copy and paste policy documents and manually go through the role creation process.

There are a few outstanding items that would make this solution simpler still.  How does the partner get the Amazon Resource Name (ARN) for the cross-account role we created?  What happens to the custom template in the S3 bucket? What if the customer tears down the template without notifying the partner?  We’ll continue to expand on this topic through future posts. See post 3 in our series here.

Let us know if you have any questions or comments!


Announcing SaaS Contracts, a Feature to Simplify SaaS Procurement on AWS Marketplace

by Brad Lyman, in AWS Marketplace, AWS Product Launch, SaaS on AWS

Since the inception of AWS Marketplace, we’ve prioritized customer and seller feedback as our starting point for driving Marketplace improvements. In response to customers and sellers pointing to software as a service (SaaS) as their preferred software delivery mechanism, we launched AWS Marketplace SaaS Subscriptions in November 2016. SaaS Subscriptions enables sellers to offer their SaaS solutions directly to AWS customers, with all charges consolidated on the customer’s bill alongside other services bought directly from AWS or through AWS Marketplace.

Our goal is to continue to enable SaaS sellers and drive additional value for customers. Today, we’re excited to announce the launch of AWS Marketplace SaaS Contracts, a feature that allows sellers to offer monthly, one-, two-, and three-year contracts for SaaS and application programming interface (API) products.

What’s the benefit of SaaS Contracts for customers?

This new capability gives AWS customers more options and flexibility in how they procure software through AWS Marketplace. Customers can use a shopping-cart-like experience to determine the number of included units and the duration of their contract. Customers can take advantage of potential cost savings from longer-term contracts and can expand their subscriptions at any time. Customers that purchase monthly contracts can move to a one-, two-, or three-year contract term as needed, and can take advantage of automatic, configurable renewals. Customers can now subscribe to over 70 SaaS products, giving them even greater selection.

How does this impact sellers?

SaaS Contracts provides sellers with even more options for monetizing their solutions for AWS customers. In addition to the pay-as-you-go options provided by SaaS Subscriptions, sellers can now provide services that require up-front payment or offer discounts for committed usage amounts. Sellers can offer customers a monthly option, which is good for customers that want to test software before making a longer commitment. From there, a buyer can easily upgrade to a one-, two-, or three-year contract term. Simple auto-renewal options make it easier to manage your ongoing relationship with customers. After creating a new contract, Marketplace buyers are passed to the seller’s website, along with an encrypted token containing their customer identifier and product code. This experience is identical to the registration process for AWS Marketplace SaaS Subscriptions. Sellers can use the customer identifier to check the customer’s entitlement by calling the AWS Marketplace Entitlement Service at any time, which means sellers can rely on AWS Marketplace to serve as their primary entitlement store.

How do I get started as a seller?

We’ve made it simple for you to deliver your solution as a SaaS offering through AWS Marketplace. Once you have established your AWS Marketplace Seller account, you’ll need to select your billing dimension. You can choose from the existing options (users, hosts, data, units, tiers, bandwidth or requests) or request an additional dimension.  You can also define multiple price points (called variants) within this dimension (for example, admin, power, and read-only users within the user category). To get started with your listing, log into the AWS Marketplace Management Portal and navigate to the “Listings” tab. To create a new SaaS listing, download and fill out the product load form for SaaS Contracts. Define your category, variants, pricing, and other listing data and submit it to AWS Marketplace once you are ready. You will receive a limited, preview version of your listing to test against before the listing is published live to the AWS Marketplace website.

Next, you’ll need to modify your registration page to receive the token containing the customer identifier and product code. You’ll also have to modify your application to call the new AWS Marketplace Entitlement Service to check the size and duration of your customer’s contract. You can download the AWS software development kit (SDK) that will help you format your metering records in any of the AWS supported languages. As a final step, you can choose to listen to notifications on a Simple Notification Service (SNS) topic for when your customers modify their contract. You can find more information about the steps necessary to modify your application in the AWS Marketplace SaaS Seller Integration guide, or reach out to your AWS Category Manager to connect with a solutions architect to help you with the process.
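The token-and-entitlement flow can be sketched as below. `ResolveCustomer` and `GetEntitlements` are the real AWS Marketplace APIs named in the text, but the registration token and product code here are hypothetical placeholders, and the parsing helper runs without AWS access.

```python
def summarize(entitlements):
    """Reduce a GetEntitlements response to units per dimension."""
    usage = {}
    for ent in entitlements:
        usage[ent["Dimension"]] = ent["Value"].get("IntegerValue", 0)
    return usage

def check_contract(registration_token, product_code):
    """Resolve a buyer's token and fetch their contract entitlements."""
    import boto3  # deferred so the parsing helper is testable offline
    customer = boto3.client("meteringmarketplace").resolve_customer(
        RegistrationToken=registration_token)
    resp = boto3.client("marketplace-entitlement").get_entitlements(
        ProductCode=product_code,
        Filter={"CUSTOMER_IDENTIFIER": [customer["CustomerIdentifier"]]})
    return summarize(resp["Entitlements"])

# Offline check of the parsing step against a response-shaped sample.
sample = [{"Dimension": "admin_users", "Value": {"IntegerValue": 5}}]
assert summarize(sample) == {"admin_users": 5}
```

On a contract change, the SNS notification mentioned above would prompt the seller to call `check_contract` again and adjust the customer’s provisioned capacity.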

How do I learn more?

At launch, AWS Marketplace SaaS Contracts features products from 20 sellers: Alert Logic, AppDynamics, Box, Cloudberry Labs, CloudHealth, Davra Networks, Device Authority, Flowroute, Informatica, Lucidchart, Mnubo, NetApp, Pitney Bowes, Simularity, Splunk, SumoLogic, ThingLogix, Threat Stack, Trend Micro, and TSOLogic. Over the next few months, we expect more than a dozen additional sellers to release products. Visit here to see all the SaaS products available on AWS Marketplace.

To learn more about selling your product as a SaaS solution, or how to modify your product to become a SaaS solution, be sure to visit

ISVs on AWS – January 2017 Highlights

by Kate Miller, in APN Technology Partners, AWS Competencies, SaaS on AWS

By Terry Wise, Global Vice President, Channels & Alliances at AWS

As VP, Channels & Alliances for AWS, I have the great fortune of meeting with APN Partners from around the world who, through their innovative use of AWS and forward-thinking approach to software development, are fundamentally changing the way enterprise customers take advantage of software and tooling to drive digital transformation and business outcomes. A key highlight for me this past year was meeting with a large number of our APN Technology Partners, who provide software solutions that are either hosted on or integrate with AWS, and learning how they’ve been able to take advantage of new selling avenues and opportunities not previously possible.

Today, the APN comprises thousands of Technology Partners around the world. Our Technology Partner base is growing rapidly year over year, as more and more ISVs look to take advantage of the benefits of deploying their software on AWS, particularly as software as a service (SaaS). ISVs on AWS are changing the way they bring value to customers, particularly through their embrace of DevOps and automation and their development of SaaS.

To help you learn more about the different ways ISVs are driving success for customers on AWS, this year we’re kicking off a new monthly blog series where we highlight four to five ISVs who are consistently evolving their solutions by taking advantage of what AWS has to offer. Today, I’m excited to kick our series off by highlighting four AWS Technology Partners who’ve deployed SaaS solutions on AWS: Codeship, CrowdStrike, Freshdesk, and Loggly.

Codeship, an AWS DevOps Competency Partner

Who is Codeship?

Codeship’s vision is simple: to build for the builders. “We want to enable software engineers, designers, and everybody who wants to craft great software by providing them with the means to speed up their development workflows, learn from their users and create better products faster,” says Moritz Plassnig, CEO/Founder of Codeship.

What has Codeship built?

The company’s current product offering is a cloud-native, hosted Continuous Integration & Delivery platform. The product tests and deploys the code your software teams create on optimized, dedicated infrastructure in the cloud, letting you outsource one of the most time-consuming by-products of creating quality software. “With Codeship, you can get a fully managed SaaS platform that offers adaptability and customizability, freeing up valuable engineering resources to dedicate them to what they are used best on: Your product,” explains Plassnig.

Why AWS?

“Nothing is more important to an early-stage startup than iterating and moving fast. It’s all about building something your first customers will truly love. To achieve that, you have to focus on the customer one-hundred percent. Everything you do should get you closer to that magical moment where your early users are unbelievably amazed by your product. Spending even a single second on building the underlying infrastructure instead of using AWS makes zero sense for us. AWS allows us to focus on delighting our customer, and gives us the peace of mind that our infrastructure will be taken care of,” says Jim Schley, VP Engineering at Codeship. The company’s SaaS solution integrates with multiple AWS services including Amazon EC2, Elastic Beanstalk, Amazon EC2 Container Service (ECS), and AWS CodeDeploy.

Customers on AWS

Codeship has driven success for many customers on AWS across multiple industries, such as digital media, financial services, retail, and software. One of Codeship’s customers is CloudSight, a company specializing in image recognition and mobile visual search that offers an image recognition API and has built a cutting-edge tech stack to support its customers. The company uses Codeship alongside multiple microservices, Docker, and Amazon ECS.

How can customers, and APN Partners, get started using Codeship?

Because Codeship’s product is offered as SaaS, it’s easy to get started. Go to, create a free account, spend a couple of minutes configuring your testing environment and project settings, and then you’re ready to go. Codeship offers a free tier for all of its products that allows developers to try the product out extensively before committing to a paid plan.

Want to learn more?

Codeship runs a highly trafficked blog featuring great technical content and thought leadership pieces. You can view the blog at

The company also offers a wide range of free educational resources including eBooks and webinars in the Codeship Library.


Who is CrowdStrike?


CrowdStrike’s mission is straightforward: to stop breaches. “When CrowdStrike was started in 2011, there were a ton of headlines about security breaches that couldn’t be solved by existing malware-based defenses. Our co-founders realized that a brand new approach was needed, so they created Falcon Host. Falcon is all about detecting threats that were previously undetectable. 6 years later, CrowdStrike sees over 27 billion endpoint events a day. All of those events get analyzed by CrowdStrike Threat Graph™, the brain that powers CrowdStrike Falcon,” says Josh Karp, Director of Global Technology Alliances at CrowdStrike.


What has CrowdStrike built?

CrowdStrike is a pioneer in next-generation endpoint protection, and unifies three crucial elements for customers: next-generation antivirus, endpoint detection and response (EDR), and a 24/7 managed hunting service, uniquely delivered via the cloud in a single lightweight sensor. CrowdStrike’s SaaS-based Falcon™ platform stops breaches by preventing and responding to all types of attacks, both malware-based and malware-free. The Falcon platform uses the patent-pending CrowdStrike Threat Graph™ to analyze and correlate billions of events in real time, providing complete protection and five-second visibility across all endpoints.

CrowdStrike Falcon was designed for the Cloud from the ground up. To deliver deep threat visibility for its global customer base, CrowdStrike Falcon leverages AWS secure cloud infrastructure. In addition to protecting on-premises endpoints, CrowdStrike Falcon Host also provides real-time protection and visibility for workloads running on Amazon Elastic Compute Cloud (Amazon EC2) to help organizations stop breaches. The lightweight Falcon sensor can be installed on both Windows and Linux operating systems hosted in EC2 in a matter of minutes. Watch this video to learn how to deploy CrowdStrike Falcon Host on Amazon EC2.

Why AWS?

“At CrowdStrike, we consider AWS to be a gold standard in cloud computing and we feel Falcon’s native cloud architecture is a perfect fit for customers moving their workloads to AWS. As we see more and more customers moving to the cloud, they are looking for a way to further protect their data and their endpoints. Deploying our SaaS-based endpoint protection platform gives those customers immediate visibility and threat protection,” says Karp.

“Working with AWS has been very rewarding,” explains Karp. “As an Advanced APN Partner, CrowdStrike has earned recognition for its rich set of APIs and has access to a variety of training, educational, and support resources.”

Customers on AWS

The CrowdStrike solution is offered as a subscription-based software as a service (SaaS), which means there is no hardware or software to install, enabling rapid time-to-value for the company’s customers. Read a number of customer case studies here to learn more about how customers are using CrowdStrike.

How can customers, and APN Partners, get started using CrowdStrike?

To get started deploying CrowdStrike Falcon on AWS, visit CrowdStrike’s website to request more information and a live demo.

“We work closely with firms, such as Systems Integrators to ensure our mutual customers receive expert advice to proactively reduce their cyber risks and provide immediate access to an accredited cyber security firm in the event of a compromise,” explains Karp. APN Consulting Partners looking to enable CrowdStrike for their customers can fill out this contact form.

Want to learn more?

For more information, click here.

Freshdesk, an AWS Marketing & Commerce Competency Partner

Who is Freshdesk?

Freshdesk is a leading cloud-based customer engagement solution that enables teams of all sizes to provide exceptional experiences to their customers. “We are laser focused on improving the entire customer support process and helping end-users engage with their favorite companies and brands more easily,” says Francesco Rovetta, VP of Alliances at Freshdesk.

What has Freshdesk built?

“Our flagship SaaS customer service solution, Freshdesk, was first released in 2011 and serves as a platform for our customers to find more and better ways to connect with their customers and the community,” explains Rovetta. Freshdesk is a multi-channel customer support solution: companies using Freshdesk can manage customer inquiries across channels (email, phone, social media, the Web, mobile apps, and chat) in one central dashboard. It also enables customers to help themselves and each other by accessing a knowledge base and forums with customized content. “Our helpdesk is available both on the web and as a mobile application. It is easy to customize, includes built-in automations, integrations with many leading SaaS products and a complete set of APIs,” says Rovetta.

The company has four products in total running on AWS. In addition to Freshdesk, these are Freshservice, a cloud-based service desk and IT service management solution; an in-app support and engagement platform for mobile-first businesses; and Freshsales, a CRM solution and sales system for high-velocity sales teams.

Why AWS?

“We are an early adopter of AWS,” Rovetta explains. “Our main criteria for deploying on the AWS Cloud were the infrastructure availability and scalability it offers with minimum latency. With these factors in mind, we transitioned to the AWS Cloud. The entire process took just a couple of weeks with zero downtime. Being able to get the job done quickly was a major factor in our ability to grow.” The company has experienced numerous benefits by moving to and running its SaaS solution on AWS. “With the help of AWS, we were able to quickly scale up millions of data points for thousands of businesses when our customer base grew exponentially. This would have certainly been a challenge with any other architecture,” says Rovetta. “In addition to the many instances we have in place today, when we reach high load, we can effortlessly increase capacity to handle the current load event. This makes it easier to handle large customers with more than 1,500 support agents. This kind of iteration at the hardware level is one of the greatest features of AWS, and it is only possible because AWS can provision (and deprovision) instances quickly.”

AWS further addressed Freshdesk’s concerns pertaining to security and data theft. “AWS certifications and architecture are top notch and they help us eliminate security gaps against possible breaches, dramatically reducing complexity and helping us shift focus to innovation without worrying about security threats,” says Rovetta.

Customers on AWS

A key customer for Freshdesk is Bridgestone Corporation, the world’s largest tire production and distribution company. “When looking at solutions, our use of AWS swung the needle in Freshdesk’s favor,” says Rovetta. Read more about Bridgestone and Freshdesk here.

How can customers, and APN Partners, get started using Freshdesk?

Customers interested in a free 30-day trial can sign up on the Freshdesk website.

Freshdesk’s core focus is to build innovative products, and the company often comes across projects that require integration and implementation work within the infrastructure of larger customers. “In these cases, we work with System Integration Partners to support our customers’ implementation needs,” says Rovetta. APN Consulting Partners can add AWS-powered Freshdesk to their portfolio as they support customers in their move to the Cloud or during their transition to a modern SaaS customer support solution. More information is available on the Freshdesk website.

Want to learn more?

Visit the company’s website to learn more about getting started with Freshdesk.

Loggly, an AWS DevOps Competency Partner

Who is Loggly?

The team at Loggly focuses on empowering companies to gain insight from huge volumes of log data. The Loggly SaaS service makes it easy to collect, access and analyze mission-critical information in one place, accelerating troubleshooting, giving AWS customers a window into the health of their systems and revealing valuable business insights based on log data.

What has Loggly built?

“In the old days, log data was usually stored on local machines and could only be accessed by a few engineers,” says John Elkaim, vice president of marketing at Loggly. “Developers and sysadmins used tools like grep to analyze system logs from a single server and get that server back to health. In today’s world of distributed systems running in elastic cloud environments, companies need to take a much smarter and more efficient approach. The number of components generating log data and the volume of that data have exploded. People in development, DevOps, and technical support need to be able to see all logs from all systems in context, collaborate, share information, and quickly pull in subject matter experts when analyzing log data. The value of log management is measured not just by server uptime but by innovation speed and revenue impact.”

AWS logs are an integral part of understanding how applications run on AWS. Loggly offers agent-free log collection for a wide range of systems, applications, and AWS services including Amazon CloudFront, AWS CloudTrail, Amazon CloudWatch Logs, Amazon CloudWatch metrics, AWS Config, Amazon EC2, Amazon ELB logs, Amazon S3 logs, and Amazon SNS. Loggly customers can also permanently archive logs to Amazon S3.

Loggly offers Application Packs for AWS CloudTrail and Amazon Classic ELB. These are pre-built dashboards and pre-set search patterns that help customers get immediate insights from the AWS logs they send to Loggly. Loggly recently published blog posts on the ELB Application Pack and the AWS CloudTrail Application Pack.


Why AWS?

“The relationship with AWS is crucial for Loggly, on both the technology and the business side. The powerful and continuously evolving technology offered by AWS, in combination with an excellent team, has helped Loggly to develop a first-class SaaS solution for our customers, most of whom are themselves AWS users,” says Elkaim. Loggly also found AWS to be key to building and deploying a successful SaaS solution. “Loggly is mission critical for thousands of customers around the globe, and that is why high availability and scalability are absolute must-haves for our SaaS solution,” says Manoj Chaudhary, CTO and vice president of engineering at Loggly. “AWS technologies like Amazon ElastiCache, Amazon RDS, and Amazon Route 53 make us stronger in these areas and do so much more efficiently than we could do ourselves. As a result, we can focus on what we know best: log management and analytics, our core competencies.”

Customers on AWS         

Joint customers include Bemobi, Creative Market, Molecule, XAPPmedia, and Citymaps.

How can customers, and APN Partners, get started using Loggly?

Loggly offers a free tier and doesn’t require agents. Just sign up for free on the Loggly website and configure your systems to start sending log data. Most AWS customers are able to do so within minutes.

The Loggly team feels that APN Consulting Partners can bring a lot of value to customers in helping them harness log data to drive success. “If your clients are embracing the vision of virtual capacity anytime and anywhere, complex microservices, elastic cloud capacity, or serverless architectures, they need a solid monitoring strategy that includes log management,” Elkaim adds. “We believe log data is the single common thread that reveals how users engage with applications, how components interact, and how complex systems perform. Operating software without a systematic approach to log management is like flying blind.”

Want to learn more?

Click here to learn more about log management with Loggly.


This blog is intended for educational purposes and is not an endorsement of the third-party product. Please contact the firms for details regarding performance and functionality.

How to Best Architect Your AWS Marketplace SaaS Subscription Across Multiple AWS Accounts

by Kate Miller | on | in AWS Marketplace, AWS Partner Solutions Architect (SA) Guest Post, SaaS on AWS | | Comments

This is a guest post from David Aiken. David is a Partner SA who focuses on AWS Marketplace.  

In my first post following the launch of AWS Marketplace SaaS Subscriptions, I provided a quick overview to describe the concepts, integration points, and how to get started with the AWS Marketplace SaaS Subscription feature. In this post, I walk through best practices for architecting your AWS Marketplace SaaS Subscription across multiple AWS accounts. Let’s begin!


Calls to the SaaS Subscriptions APIs, ResolveCustomer and BatchMeterUsage, must be signed by credentials from your AWS Marketplace Seller account. This does not mean that your SaaS code needs to run in the AWS MP Seller account. The best practice is to host your production code in a separate AWS account, and use cross-account roles and sts:AssumeRole to obtain temporary credentials which can then be used to call the AWS MP Metering APIs. This post walks you through how this can be implemented.


In our example, there are two AWS accounts:

  • AWS Marketplace Seller Account – this is the account your organization has registered as a seller in AWS Marketplace. API calls must be authenticated from credentials in this account.
  • AWS Account for Production Code – this is the AWS account where your SaaS service is hosted.

Why Use Separate Accounts?

Sellers should only use a single AWS Account as the AWS Marketplace account. This simplifies management and avoids any confusion for customers viewing an ISV’s products and services.

Separating the Seller account from the product accounts means each SaaS service can have its own AWS account, which provides a good security and management boundary. When a seller has multiple products, multiple AWS accounts can be used to further separate environments across teams.

Using different AWS Marketplace seller and production accounts

In this scenario, there are two AWS accounts in play: the AWS account registered as an AWS Marketplace seller (222222222222) and the AWS account where the production code resides (111111111111).


The Seller Account is registered with AWS Marketplace and does have permissions to call the Metering APIs. The seller account contains an IAM Role, with the appropriate IAM Policy to allow access to the Metering API as well as the permission for the role to be assumed from the Production Account.

The IAM Role in the Seller Account in our example is called productx-saas-role. This has the AWSMarketplaceMeteringFullAccess managed policy attached. The IAM Role has a trust relationship as shown below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "someid"
        }
      }
    }
  ]
}
The SaaS application is hosted in the Production Account. This account is not authorized to call the Metering APIs directly. It contains an IAM role and policy attached, via an EC2 instance profile, to the EC2 instances hosting the application. The instance profile provides the instances with temporary credentials that can be used to sign AWS API requests. These credentials are used to call sts:AssumeRole, which returns temporary credentials from the seller account; those credentials, in turn, are used to call the Metering API.

The permissions required to perform the sts:AssumeRole command are:

    {
        "Version": "2012-10-17",
        "Statement": {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::222222222222:role/productx-saas-role"
        }
    }

In order for the application to make a call to the Metering API, it must first assume the role in the seller account. This is done by calling the sts:AssumeRole method. If successful, this call returns temporary credentials (secret/access keys). These credentials can then be used to call the Metering API.

The following code snippet shows how to call the assume_role function in Python to obtain temporary credentials from the seller account.

import boto3

sts_client = boto3.client('sts')

# Assume the cross-account role in the seller account. The role ARN and
# external ID match the trust policy shown above; RoleSessionName is an
# arbitrary label for this session.
assumedRoleObject = sts_client.assume_role(
    RoleArn='arn:aws:iam::222222222222:role/productx-saas-role',
    RoleSessionName='marketplace-metering',
    ExternalId='someid')

credentials = assumedRoleObject['Credentials']

# Create a Metering API client signed with the temporary credentials
client = boto3.client('meteringmarketplace', region_name='us-east-1',
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken'])
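Temporary credentials returned by the assume_role call expire (after one hour by default), so a long-running metering worker should check for staleness before each hourly report. A minimal sketch, assuming the `Credentials` dict shape STS returns, where `Expiration` is a timezone-aware datetime; the helper name and margin are ours:

```python
from datetime import datetime, timedelta, timezone

def needs_refresh(credentials, margin_minutes=5, now=None):
    """True if the STS temporary credentials expire within the safety margin."""
    now = now or datetime.now(timezone.utc)
    return credentials["Expiration"] - now < timedelta(minutes=margin_minutes)
```

When this returns True, simply repeat the assume_role call to obtain fresh credentials before calling the Metering API.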


Using a single AWS Account for AWS Marketplace avoids confusion and mistakes. Using cross-account roles allows you to avoid hosting production code in the AWS Account registered as a seller. For more information on SaaS Subscriptions, please visit the AWS Marketplace SaaS Subscriptions page.

Delivering Real-Time Insights to Enterprise Customers – New Relic, an AWS Migration Competency Partner

by Kate Miller | on | in APN Competency Partner, APN Technology Partners, Enterprise, Migration, re:Invent 2016, SaaS on AWS | | Comments

We launched the AWS Migration Competency in June 2016 with one simple goal in mind: to help customers connect with AWS Partners who’ve proven their expertise helping customers of all sizes successfully migrate to AWS. The principle of simplicity has driven the launch of all of our Competencies. Our mission is to make it as easy as possible for customers to find AWS Partners who’ve demonstrated deep expertise in particular solution areas, and we plan to continue to launch Competencies in different areas to reach more use cases on AWS.

New Relic is an Advanced APN Technology Partner who holds the AWS DevOps, AWS Mobile, and AWS Migration Competencies, with a particular focus on application testing and monitoring. New Relic offers solutions that provide real-time, end-to-end intelligence across customer experiences, application performance, and dynamic infrastructure. As an Advanced APN Partner, the company has built a mature AWS-based business, and has helped a wide range of enterprise customers successfully take advantage of the benefits of AWS, including Fairfax Media, FlightStats, MLBAM, and News Corp. New Relic continues to innovate on AWS, and has recently expanded its solutions portfolio to include New Relic Infrastructure, an infrastructure monitoring solution on AWS. Learn more on the New Relic website.

We recently caught up with the New Relic team to learn a little more about what makes them unique as an ISV, how they work with AWS, the migration trends the team is seeing within its customer base, and what’s next for the company on AWS.

Who is New Relic?

New Relic is a leading digital intelligence company that delivers full-stack visibility and analytics to more than 14,000 customers, including more than 40 percent of the Fortune 100. “Our customers include digital-first companies like AirBnB, as well as large enterprises looking to transform for the digital era, including GE and MLB Advanced Media,” says John Gray, SVP of Business Development at New Relic. “New Relic helps organizations understand if their apps are up and running, how to improve their digital customer experience, and realize the promise of their digital investments.”

As an Advanced Technology Partner, New Relic works closely with AWS to ensure the performance and success of customers’ applications and infrastructure utilizing AWS services.

Why AWS?

With a deep focus on end customers and their wide range of use cases, New Relic saw the potential to build solutions for customers on AWS. “With a large customer base building cloud-native applications, New Relic recognized the market opportunity to align with companies shifting to cloud usage. AWS provides a flexible and highly scalable solution that allows us to build high availability, highly scalable services in a very timely manner. As a result, it made sense to offer an application and infrastructure performance monitoring solution on AWS,” explains Lee Atchison, Principal Cloud Architect at New Relic. “We use AWS to develop our applications, and deploy highly scalable production services.”

Innovating on AWS

A pure multi-tenant SaaS platform, the New Relic Digital Intelligence Platform provides visibility into how customers make use of cloud-based services, such as AWS, from within the applications and infrastructure running on AWS services. New Relic also provides active monitoring of Amazon Elastic Compute Cloud (Amazon EC2) servers for infrastructure-level monitoring and configuration management. “We offer companies the ability to increase their understanding of the behaviors of the AWS services they use, code running on AWS services, trends in digital experiences, and the business outcomes of those experiences,” says Atchison. With the recent addition of New Relic Infrastructure, the platform includes out-of-the-box integrations that provide expanded native monitoring for popular AWS services such as Amazon CloudFront, Amazon RDS, Elastic Load Balancing, and more.

As the company continued to build and grow on AWS, New Relic chose to become an APN Partner to demonstrate its strong relationship with AWS to both current and prospective customers.

Customer Migrations to AWS

By helping customers identify, benchmark, and troubleshoot application performance through New Relic Application Performance Monitoring (New Relic APM), New Relic has helped a number of customers successfully migrate to AWS. Fairfax Media, a leading media company in Australia and New Zealand, used New Relic on their on-premises systems while migrating to AWS, and were able to identify potential application issues and fix them before the migration. Dedalus, a Premier AWS Consulting Partner and Systems Integrator, has used New Relic to extend monitoring beyond infrastructure to the application layer, and monitor application queries and health for over 100 of its customers. “Through New Relic APM and New Relic Infrastructure, we’re able to provide customers a seamless diagnostics experience with full stack visibility for their applications running on Amazon EC2,” says Atchison.

For customers about to undergo a large-scale cloud migration, the New Relic team recommends investing in both infrastructure and application-layer performance monitoring. “Monitoring your cloud infrastructure is important, so you don’t find yourself over-provisioned, or under-provisioned in a critical area,” explains Atchison. “However, monitoring infrastructure shouldn’t be the only avenue to monitor your performance. A single query from an application can be the true cause of why your infrastructure isn’t performing as you’d expect. Monitoring performance at the application level to catch these irregularities is also critical.”

New Relic was a launch partner for the AWS Migration Competency, and believes the Competency helps customers understand the company’s level of expertise within the space. “We believe the AWS Migration Competency assures customers and prospects that New Relic has a deep understanding of the migration process to AWS,” says John Gray, SVP of Business Development.

What’s Next?

With the recent launch of New Relic Infrastructure, which is now offered on AWS Marketplace through AWS Marketplace SaaS Subscriptions, the company is looking beyond its expertise in the application performance monitoring space to provide customers with an end-to-end view of their environment on AWS. “New Relic Infrastructure provides customers with easy, dynamic instance monitoring, and the ability to efficiently understand their EC2 usage and optimize usage and cost,” explains Gray.

As an APN Partner, New Relic is working with AWS to explore new marketing opportunities such as videos, shared customer speaking experiences at AWS and New Relic events, workshops, and more. As previously mentioned, New Relic recently listed New Relic Infrastructure on AWS Marketplace and is looking to expand products available on AWS Marketplace.

Lee Atchison, Principal Cloud Architect at New Relic, held two speaking sessions at AWS re:Invent 2016. Check them out:

To learn more about New Relic, visit the company’s listing in the AWS Partner Finder.

Have You Read Our 2016 AWS Partner Solutions Architect Guest Posts?

by Kate Miller | on | in Amazon DynamoDB, Amazon ECS, APN Competency Partner, APN Partner Highlight, APN Technical Content Launch, APN Technology Partners, Automation, AWS CloudFormation, AWS Lambda, AWS Marketplace, AWS Partner Solutions Architect (SA) Guest Post, AWS Product Launch, AWS Quick Starts, Big Data, Containers, Database, DevOps on AWS, Digital Media, Docker, Financial Services, Healthcare, NAT, Networking, Red Hat, SaaS on AWS, Security, Storage | | Comments

In 2016, we hosted 38 guest posts from AWS Partner Solutions Architects (SAs), who work very closely with both Consulting and Technology Partners as they build solutions on AWS. As we kick off 2017, I want to take a look back at all of the fantastic content created by our SAs. A few key themes emerged throughout SA content in 2016, including a focus on building SaaS on AWS, DevOps and how to take advantage of particular AWS DevOps Competency Partner tools on AWS, Healthcare and Life Sciences, Networking, and AWS Quick Starts.

Partner SA Guest Posts

There’ll be plenty more to come from our SAs in 2017, and we want to hear from you. What topics would you like to see our SAs discuss on the APN Blog? What would be most helpful for you as you continue to take advantage of AWS and build your business? Tell us in the comments. We look forward to hearing from you!


Why Did Dynatrace Build a SaaS Solution on AWS?

by Kate Miller | on | in APN Partner Highlight, APN Partner Success Stories, APN Technology Partners, AWS Marketplace, DevOps on AWS, Migration, SaaS on AWS | | Comments

Dynatrace is an Advanced APN Technology Partner, and holds the AWS Migration and DevOps Competencies. The company recently began offering its cloud application performance management service directly through AWS Marketplace, as a part of the recent AWS Marketplace SaaS Subscriptions launch.

We recently caught up with John Van Siclen, CEO of Dynatrace, and Alois Reitbauer, VP, Chief Technology Strategist of Dynatrace, to learn more about why they chose to build a SaaS solution on AWS, and the value of becoming an APN Partner. Take a look:

To learn more about Dynatrace, click here.

Just Launched: Canonical Enterprise Support for Ubuntu on AWS Marketplace

by Kate Miller | on | in APN Partner Highlight, APN Technology Partners, AWS Marketplace, re:Invent 2016, SaaS on AWS | | Comments

This is a guest post from Udi Nachmany, Head of Public Cloud at Canonical. Canonical is an Advanced APN Technology Partner. 

Ubuntu has long been popular with users of AWS, due to its stability, regular cadence of releases, and scaleout-friendly usage model. Canonical, an Advanced APN Technology Partner, optimizes, builds, and regularly publishes the latest Ubuntu images to the Amazon EC2 console and AWS Marketplace, which is designed to provide an optimal Ubuntu experience for developers who are using AWS Cloud services. At AWS re:Invent 2016, Canonical will augment that experience with the added stability, security, and efficiency enterprise users require, by launching its enterprise support package for Ubuntu, Ubuntu Advantage, on AWS Marketplace.

Ubuntu Advantage Virtual Guest is designed for virtualized enterprise workloads on AWS, which use official Ubuntu images. It is the professional package of tooling, technology, and expertise from Canonical, and helps organizations around the world manage their Ubuntu deployments. Ubuntu Advantage Virtual Guest includes:

  • Access to Landscape (SaaS version), the systems management tool for using Ubuntu at scale
  • Canonical Livepatch Service, which allows you to apply critical kernel patches without rebooting on Ubuntu 16.04 LTS images using the Linux 4.4 kernel
  • Up to 24×7 telephone and web support
  • Access to the Canonical Knowledge Hub, and regular security bug fixes

The added benefits of accessing Ubuntu Advantage through the AWS Marketplace SaaS subscription model are hourly pricing rates based on the size of your actual Ubuntu usage on AWS, and centralized billing through your existing AWS Marketplace account. Ubuntu Enterprise Support is available in two tiers: Standard and Advanced. You can learn about the difference in support levels here.

At re:Invent, you will also be able to learn more about Canonical’s innovations around software operations, containers, and the Internet of Things (IoT). Nearly all Canonical technologies such as Juju, LXD, and Snaps, as well as the Canonical distribution of Kubernetes, can be used and deployed in production with your Amazon EC2 credentials today.  What’s more, these technologies are supported with professional SLAs from Canonical.

We are also actively innovating around containers with our machine container solution LXD,  which provides the density and efficiency of containers with the manageability and security of virtual machines. We are also partnering with Docker on the Cloud Native Computing Foundation (CNCF) and others around process container orchestration. All of this and much more can be deployed through Juju, our open source service modeling platform for operating complex, interlinked, dynamic software stacks known as Big Software.

Snaps are a new packaging format used to securely package software as an app, making updates and rollbacks a breeze. Canonical’s Ubuntu Core is an open source, Snap-enabled production operating system that powers virtually anything, including robots, drones, industrial IoT gateways, network equipment, digital signage, mobile base stations, and fridges.

At re:Invent 2016, we will be talking to Ubuntu users about all these innovations and more. Come visit us at booth 2341 in Hall D.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

How to Integrate Your SaaS Service with SaaS Subscriptions for AWS Marketplace

by Kate Miller | on | in AWS Marketplace, AWS Partner Solutions Architect (SA) Guest Post, SaaS on AWS | | Comments

The following is a guest post from David Aiken, Partner Solutions Architect (SA) for AWS Marketplace

The AWS Marketplace SaaS Subscription feature allows customers to find, subscribe and pay for the usage of your SaaS solution through AWS. In this post I’ll give you a quick overview describing the concepts, integration points and how to get started. You can find out more information by registering as a seller with AWS Marketplace and accessing the Seller Portal here.

The metering for the AWS Marketplace SaaS Subscription service is consumption based, meaning you charge users at an hourly rate for what they have done or used. For example, if your SaaS service managed websites, you would set a price per website. Each hour, you would report how many websites were being managed by your service, and AWS would do the math and add the total to the customer’s bill for that hour.

When using the AWS Marketplace SaaS Subscription service, you need to determine the type of usage you are going to charge for. The top-level usage type is known as a “category” and can be Hosts, Users, Data, Bandwidth, Requests, or Tiers. You may select only one category. Within a category, you can define between 1 and 8 dimensions. Dimensions allow you to distinguish between different kinds of usage within a category; for example, if the category is Users, you could have a dimension for Admin users charged at $0.87/user/hour and another for Standard users charged at $0.22/user/hour.
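The per-dimension arithmetic is simple: each hour, the customer's charge is the sum over dimensions of quantity times rate. A minimal sketch using the hypothetical Admin/Standard rates from the example above (the dimension names are ours):

```python
# Hypothetical dimension rates from the example above, in $/user/hour.
RATES = {"AdminUsers": 0.87, "StandardUsers": 0.22}

def hourly_charge(usage):
    """Sum quantity * rate across dimensions for one metering hour."""
    return round(sum(RATES[dim] * qty for dim, qty in usage.items()), 2)

# A customer with 3 admin users and 10 standard users for one hour:
# 3 * 0.87 + 10 * 0.22 = 2.61 + 2.20 = 4.81
print(hourly_charge({"AdminUsers": 3, "StandardUsers": 10}))
```

AWS performs this math for you from the usage records you report; the sketch is only meant to make the pricing model concrete.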

Integrating with the SaaS Subscription Service

Once you have your category, dimensions and costs figured out, you need to integrate SaaS Subscriptions into your SaaS service. There are three integration tasks to complete:

Customer registration – When a customer subscribes to your service from AWS Marketplace, they are redirected via an HTTP POST to your registration page. The POST request includes a form field named x-amzn-marketplace-token. This token can be redeemed via an AWS API call to ResolveCustomer to determine the customer ID of the subscriber and the product code of the service they subscribed to. You should save these values alongside any registration information, as you will need them when reporting metering usage.
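The registration handoff can be sketched as follows. The resolve_customer call and its RegistrationToken parameter are the real boto3 API; the `form` dict and the helper names are our assumptions about how a web handler might pass along the POSTed fields:

```python
def registration_token(form):
    """Pull the Marketplace token out of the POSTed registration form fields."""
    return form["x-amzn-marketplace-token"]

def resolve_subscriber(form, region="us-east-1"):
    """Redeem the token via ResolveCustomer; returns (customer_id, product_code)."""
    import boto3  # deferred import: the pure helper above stays usable without AWS
    client = boto3.client("meteringmarketplace", region_name=region)
    resp = client.resolve_customer(RegistrationToken=registration_token(form))
    return resp["CustomerIdentifier"], resp["ProductCode"]
```

The returned pair is what you would persist alongside the customer's registration record for later metering calls.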

Report usage information – Each hour, you need to report usage information for each customer. You do this via an API call to BatchMeterUsage, sending up to 25 metering records at a time. You would send 1 record per customer per dimension. Each call would include the Customer ID, Dimension Name, Usage Quantity and UTC timestamp.
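Since BatchMeterUsage accepts at most 25 records per call, hourly reporting typically builds one record per customer per dimension and sends them in chunks. A sketch in which the record fields match the ones named above, while the helper names and example identifiers are ours:

```python
from datetime import datetime, timezone

BATCH_LIMIT = 25  # BatchMeterUsage accepts at most 25 records per call

def usage_records(usage_by_customer, timestamp):
    """Build one record per customer per dimension for one metering hour."""
    return [
        {"CustomerIdentifier": customer,
         "Dimension": dimension,
         "Quantity": quantity,
         "Timestamp": timestamp}
        for customer, dims in usage_by_customer.items()
        for dimension, quantity in dims.items()
    ]

def batches(records, limit=BATCH_LIMIT):
    """Split the record list into API-sized chunks."""
    return [records[i:i + limit] for i in range(0, len(records), limit)]

hour = datetime(2017, 1, 1, 12, tzinfo=timezone.utc)
records = usage_records({"cust-1": {"AdminUsers": 3, "StandardUsers": 10}}, hour)
# Each chunk would then be sent as
# client.batch_meter_usage(UsageRecords=chunk, ProductCode="your-product-code")
```

Chunking up front keeps the hourly reporting loop simple even as the customer count grows past 25.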

Handle Subscription events – When a customer subscribes or unsubscribes from your service, messages are sent to an SNS topic created by AWS for your service. It’s a good practice to subscribe an SQS queue to the topic, then read the messages from the queue. This way you won’t lose messages if your service is unavailable. The most important event to handle is the unsubscribe-pending. If you receive this for a customer, you will have 1 hour to report any final usage. After an hour you will receive an unsubscribe-success message, at which time no more metering records can be sent for that customer.
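The event handling above can be sketched as a small dispatcher over messages read from the SQS queue. The `action` and `customer-identifier` fields reflect the notification events named above, but the exact message shape and the callback hooks are our assumptions:

```python
import json

def handle_subscription_message(body, report_final_usage, deactivate):
    """Dispatch one subscription event pulled off the SQS queue.

    `report_final_usage` and `deactivate` are hypothetical callbacks
    into your own service.
    """
    msg = json.loads(body)
    action, customer = msg["action"], msg["customer-identifier"]
    if action == "unsubscribe-pending":
        # One hour remains to send any final metering records.
        report_final_usage(customer)
    elif action == "unsubscribe-success":
        # No further metering records may be sent for this customer.
        deactivate(customer)
    return action, customer
```

Because the queue buffers events while your service is down, this handler can safely run in a polling loop without losing unsubscribe notifications.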

Figure 1: SaaS Subscriptions Integration Points


Getting Started

Before you can start development work, you will need a product created in AWS Marketplace. To do this, you will need to use Self-Service Listings, located within the “Listings” tab of the AWS Marketplace Management Portal (AMMP). If you do not already have access to AMMP, you must first register for the portal. To create a new SaaS Subscriptions product, log in to AMMP, navigate to the “Listings” tab, look for the “Create a New Product” box, and choose “SaaS Subscriptions” from the dropdown. You will then be guided through a set of web forms that will help you create your listing.

Once you have completed, reviewed, and submitted the form, the AWS Marketplace operations team will create a product listing for your service and send you the product code, along with the SNS topic. When your product is ready for review, your listing will show up with a status of “Approval Required” under the “Requests” area of Self-Service Listings. You can then click on your request to view your product code, pricing information, limited preview listing, and more. The product will remain hidden in limited preview until you have completed your development work and are ready to go public.

Seller vs Production Accounts

To list a product or SaaS service in AWS Marketplace, you need to register an AWS account to be your seller account. You can only have a single seller account, so you should consider creating a new account just for this purpose.

Calls to the AWS APIs need to be authenticated from the seller account. Rather than hosting your production code in the seller account, or embedding secret/access keys in your production code, you should consider using cross-account AWS Identity and Access Management (IAM) roles. Cross-account IAM roles allow you to run production code in accounts other than your seller account. This is very useful when you want to maintain a security boundary, or when you have multiple products to list.
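A cross-account setup might look like the sketch below: a role in the seller account trusts the production account, and the production code assumes it via STS before making metering calls. The role name, session name, and region are placeholders you would replace with your own:

```python
def seller_role_arn(seller_account_id, role_name="MarketplaceMetering"):
    """Build the ARN of an IAM role in the seller account.

    "MarketplaceMetering" is a placeholder role name; create the role in the
    seller account with a trust policy allowing your production account.
    """
    return "arn:aws:iam::%s:role/%s" % (seller_account_id, role_name)

def metering_client_for_seller(seller_account_id):
    """Assume the seller-account role and return a metering client."""
    import boto3
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=seller_role_arn(seller_account_id),
        RoleSessionName="marketplace-metering",
    )["Credentials"]
    return boto3.client(
        "meteringmarketplace",
        region_name="us-east-1",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```

With this in place, your production account never holds long-lived seller credentials; the temporary STS credentials expire on their own.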


When building out your integration, you will need several AWS accounts available to act as test customers. Because the product listing page is hidden during development, you will need to ask the AWS Marketplace operations team to authorize specific AWS accounts to view your product. When you create your product in Self-Service Listings, you can identify any additional AWS accounts to authorize by entering them in the “Accounts to Whitelist” field located on the “Pricing” tab. Once an AWS account is authorized, you can use that account to subscribe to your product and perform your testing.

You may also wish to have a test product set up so you can test the subscription workflow, metering and event handling in a different environment than your production code. To create a test product, simply create and submit another SaaS product using Self-Service Listings, making sure to indicate in the title of your product that it is the test version.


Conclusion
Integrating with SaaS Subscriptions requires you to be registered as a seller in AWS Marketplace and have a SaaS product created. There are three integrations to complete: customer registration, reporting customer usage, and handling subscription events. For more information, please visit the AWS Marketplace SaaS Subscriptions page.