AWS Partner Network (APN) Blog

Partner SA Roundup – May 2017

by Kate Miller | in APN Technology Partners, AWS Partner Solutions Architect (SA) Guest Post, IoT

Every month or so I work with our awesome Emerging Partner SA team to highlight the technical innovations we’re seeing from a range of APN Technology Partners. This month, we’re going to hear from Partner SAs Ian Scofield, Juan Villa, and Erin McGill, as well as Partner Development Rep Makda Tesfa-George, who discuss solutions from APN Technology Partners Cesanta, Fugue, Bright Wolf, and FittedCloud!

Cesanta, by Juan Villa

Writing or porting real-time operating systems for different hardware platforms can be a complicated, lengthy, and error-prone process. Cesanta, an IoT software engineering company and AWS Advanced Technology Partner, develops and supports the popular Mongoose OS, an Open Source Operating System specifically designed for the Internet of Things. Mongoose OS currently supports the ESP32, ESP8266, STM32, and TI CC3200 microcontroller platforms. These are hardware platforms by Espressif, STMicroelectronics, and Texas Instruments, who are also APN Partners.

With Mongoose OS you don’t have to set up a complex cross-compilation environment to build the OS for the target hardware platform. Mongoose OS provides a command line tool called “mos” that you can use to configure, build, and flash the OS for the target platform. Additionally, the OS has native support for the AWS IoT service, and you can configure it with the command line utility, which will automatically leverage the AWS IoT API to create a device and provision a certificate for use with the AWS IoT service. Mongoose OS has many features! I encourage you to check out their website for a detailed list.
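To make the provisioning step concrete, here is a minimal boto3 sketch of roughly the sequence of AWS IoT API calls that “mos” automates for you. This is an illustrative sketch, not Cesanta’s actual implementation; the thing name and region are hypothetical placeholders.

```python
import boto3

iot = boto3.client("iot", region_name="us-east-1")

# Register a thing to represent the device
iot.create_thing(thingName="mongoose-esp8266-demo")  # hypothetical name

# Create and activate a certificate/key pair for the device
cert = iot.create_keys_and_certificate(setAsActive=True)

# Bind the certificate to the thing so the device can authenticate
iot.attach_thing_principal(
    thingName="mongoose-esp8266-demo",
    principal=cert["certificateArn"],
)

# A real setup would also attach an AWS IoT policy to the certificate
# and flash the certificate and private key onto the device.
print(cert["certificateArn"])
```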

Need help engineering an IoT software solution using Mongoose OS? Cesanta provides licenses and engineering support to help you get started with your next IoT project. For additional information on using Mongoose OS with the AWS IoT service I encourage you to check out this blog post by a fellow AWS Solutions Architect.


Fugue, by Erin McGill

Our APN Technology Partner Fugue continues to expand their DevOps offerings on AWS for SMEs and larger enterprises. Fugue previously made an appearance on this blog because they help AWS customers validate, build, and enforce cloud infrastructure with an automated, easy-to-integrate system.

Fugue has released its new Team Conductor to enable customers to centralize the management of multiple workloads across a number of AWS accounts. The Fugue Conductor runs inside a customer’s AWS account and automates the full lifecycle of cloud infrastructure environments. Using cross-account IAM roles, the Team Conductor only needs permission to assume a role in the external account, so no credential secrets are shared. There’s no need to deploy a Fugue Conductor in every account in which customers want to manage infrastructure. This provides a simplified and more cost-effective solution for managing workloads.
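The cross-account mechanics look roughly like the following boto3 sketch. This illustrates the general pattern, not Fugue’s actual implementation; the role ARN and external ID are hypothetical placeholders.

```python
import boto3

sts = boto3.client("sts")

# Assume a role the customer created in their own account; only temporary,
# auto-expiring credentials are ever exchanged
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ExampleManagedAccessRole",
    RoleSessionName="team-conductor-session",
    ExternalId="example-external-id",  # guards against the confused-deputy problem
)

creds = resp["Credentials"]
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Operate on the external account's infrastructure with the temporary credentials
print(len(ec2.describe_instances()["Reservations"]))
```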

Fugue’s Team Conductor provides role-based access controls (RBAC) to facilitate secure collaboration around AWS infrastructure operations. Customers can create Fugue Conductor users to interact with AWS infrastructure via Fugue and regulate the creation and modification of cloud resources.

Fugue has also released a prototype tool designed to help you visualize AWS infrastructure. As customers expand their AWS workloads, the interactions among all the services can become more complex. With Fugue’s prototype visualization tool, customers can view how their environments are designed and how services interact with one another.

Fugue’s visualization tool is presented in three panes. The left pane shows your file structure with sample Fugue compositions that will create your AWS infrastructure. The middle pane displays the composition that is being visualized, and the right pane provides the illustration. Customers can view their AWS infrastructure before creating or changing any resources.

Head over to https://fugue.co/ and sign up to launch a Fugue Conductor AMI in your account to start working with Fugue, and visit https://playground.fugue.co/# to learn more about Fugue’s visualization tool.


ZipLine by Bright Wolf, by Makda Tesfa-George and Marc Phillips (Bright Wolf)

Enterprises are excited to take advantage of the AWS IoT Button Enterprise Program. Bright Wolf, an APN Advanced Technology and IoT Competency Partner, has released ZipLine, an instant-on visual platform for rapid deployment and configuration of multi-tenant AWS IoT Enterprise Button solutions.

ZipLine is built on Bright Wolf’s Strandz IoT data management platform and uses AWS IoT, AWS Lambda, Amazon SNS, Amazon SQS, Amazon RDS, AWS Elastic Beanstalk, Amazon S3, and other AWS Cloud services. ZipLine runs as an Amazon Machine Image (AMI) inside the enterprise customer’s account.

ZipLine makes fully contextualized button events readily available for integrated enterprise systems, using SQS to connect with existing business workflows and AWS infrastructure, which enables highly customizable solutions for AWS customers. ZipLine’s out-of-the-box, ‘no code required’ user interface for assigning and managing large fleets of buttons makes it easy to quickly deploy across different organizations and sites. Flexible event types, notification settings, and role-based access control (RBAC) enable AWS Enterprise IoT Button customers to rapidly configure and deploy multi-tenant button-based solutions.
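To illustrate the SQS integration point, here is a hedged sketch of how a downstream business workflow might poll a queue of button events; the queue URL and message fields are hypothetical placeholders, not ZipLine’s actual schema.

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/button-events"  # hypothetical

while True:
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling reduces empty responses
    )
    for msg in resp.get("Messages", []):
        event = json.loads(msg["Body"])
        # Hand the contextualized event to an existing business workflow
        print(event.get("buttonId"), event.get("clickType"))  # hypothetical fields
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```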

ZipLine’s built-in dashboards and reporting include a clickable map view with customizable interior floor plan maps, historical and real-time activity monitors, and remaining useful life indicators for each button, and provide a fast track to Amazon Machine Learning and other business intelligence offerings.

For more information about Bright Wolf and ZipLine, check out the company’s website.


FittedCloud, by Ian Scofield

Some of the many benefits of using AWS are agility and elasticity: you can provision new resources when you need them and scale to meet your needs. However, optimization is still important to ensure that you don’t under-utilize or waste resources. Examples include provisioning large Amazon Elastic Block Store (EBS) volumes but using only a small fraction of the total capacity, or originally selecting an m4.10xlarge instance for an application that really only needs an m4.2xlarge. These are just two examples of areas where customers can achieve significant cost savings.

APN Advanced Technology Partner FittedCloud can help you examine your resources, make intelligent recommendations to identify these areas for optimization, and alert you through ‘Actionable Advisories’. More importantly, it can take automated actions on your behalf with zero or minimal impact to your applications. ‘Actionable Advisories’ support a broader set of AWS resources and allow users to click on alerts to take immediate cost-saving actions. Complete automation of optimization actions, driven by user-defined policies or machine learning, is supported for Amazon EC2, Amazon EBS, and Amazon DynamoDB.

FittedCloud’s AWS EBS Optimizer, through the use of an agent, manages your EBS storage to help ensure you only pay for the capacity you actually use. It employs machine learning algorithms to help predict usage patterns in order to closely match the amount and type of resources with your utilization. It supports all EBS volume types, including GP2, Magnetic, SC1, ST1, and IO1. FittedCloud builds on the newly released Amazon EBS Elastic Volumes feature, adding capabilities such as the ability to decrease a volume’s size without user interaction.
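To see what an Elastic Volumes resize looks like at the API level, here is a sketch of the kind of call an optimizer can make on your behalf. Note that the native EC2 API only grows a volume; decreasing a volume’s size, as FittedCloud offers, requires additional work behind the scenes. The volume ID is a hypothetical placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Grow a volume to 200 GiB in place, without detaching it
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)

# Check the progress of the modification
mods = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
print(mods["VolumesModifications"][0]["ModificationState"])
```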

For both Amazon DynamoDB and Amazon EC2, FittedCloud gives users the option to define specific scaling policies, or you can employ the same machine learning algorithm to make metric-driven automated actions. FittedCloud can also provide Instance Rightsizing by analyzing usage to determine the optimal instance size. Additionally, it can identify rogue or abandoned instances which can not only reduce unnecessary costs, but help mitigate security risks as well. All of these recommendations and a breakdown by resource can be seen on their dashboard, helping you see where your significant cost savings are.

To learn more about FittedCloud and their offering, head on over to their website or check out their blog!

Overview of Oracle E-Business Suite on AWS – A New Whitepaper Now Available

by Kate Miller | in AWS Partner Solutions Architect (SA) Guest Post, Oracle

By Jayaraman Vellore Sampathkumar. Jayaraman is an AWS Oracle Solution Architect. 

Recently, we published a whitepaper on Oracle E-Business Suite on AWS. This whitepaper provides an architectural overview of Oracle E-Business Suite 12.2 on AWS and outlines the benefits and options for running Oracle E-Business Suite on AWS.

AWS customers and APN Partners can leverage the architectural patterns outlined in the whitepaper and can learn how to take advantage of newer AWS services like Amazon Elastic File System (Amazon EFS). The whitepaper also provides a general overview of storage options available in AWS. You can leverage different storage options for different tiers of Oracle E-Business Suite – database tier, application tier, utl_file_dir, and backups. Subsequent whitepapers in this series will cover advanced topics and outline best practices for high availability, security, scalability, performance, migration, disaster recovery, and management of Oracle E-Business Suite systems on AWS.

The complete whitepaper is available to download here.

AWS Training and Certification Portal Now Live

by AWS Training and Certifications Team | in APN Launches, AWS Training and Certification

AWS Training and Certification can help APN Partners deepen AWS knowledge and skills, differentiate their business, and better serve customers. And now, the AWS Training and Certification Portal allows you to access and manage your training and certification activities, progress, and benefits – all in one place. Previously, you had to rely on multiple websites to find and manage our training and certification offerings. Now you have a central place where you can find and enroll in AWS Training, register for AWS Certification exams, track your learning progress, and access benefits based on the AWS Certifications you have achieved. This makes it easier for you to build your AWS Cloud skills and advance toward earning AWS Certification.

To get started, you can simply sign in to the Portal using your APN Portal credentials. If you had a Webassessor account for AWS Certification, you can visit the Certification tab and merge this account history with the new portal. Need help? Visit our AWS Training and Certification Portal FAQs.

Once you are set up, you can rely on the AWS Training and Certification Portal to be your place to find the latest AWS training and certification offerings, built by AWS experts.

Citrix Customers – Bring Your Own Windows Client Licenses for Use on AWS

by Andrew Kloman | in APN Technology Partners, AWS Partner Solutions Architect (SA) Guest Post, End User Computing

By Andrew Kloman. Andrew is a Partner Solutions Architect (SA) at AWS. 

Citrix customers, did you know that you can bring your own Microsoft Windows client licenses and use them on Amazon Web Services (AWS)? This includes deploying Windows 10 on AWS for your Citrix XenDesktop deployments.

Within the Studio console, Citrix XenDesktop supports the desktop deployment model called “Use hardware that is dedicated to my account”. When you select this option, Citrix XenDesktop deploys to Amazon Elastic Compute Cloud (Amazon EC2) Dedicated Instances to comply with the licensing requirements of Windows 10 on AWS.

Confused about what Microsoft Licensing is required?

Running Windows client operating systems such as Windows 7, 8, or 10 on AWS may require Windows client licenses with Software Assurance, or Virtual Desktop Access (VDA) licenses. We recommend that you read this Microsoft licensing brief for more information from Microsoft, and review the AWS FAQ documents you can find here.

The chart found here outlines common licensing scenarios when you bring your own Microsoft license to AWS.

Learn More at Citrix Synergy 2017

Citrix will also be providing a hands-on lab entitled LAB615: Deploying and automating Citrix solutions with Citrix Cloud and AWS. If you are interested in learning more, please sign up for the hands-on lab and say hello! From the lab description:

Amazon and Citrix have collaborated to offer a set of best practices and cloud migration tools to help you deploy your Citrix solutions on Amazon Web Services (AWS) faster and with higher ROI. Join us to learn how to deploy XenApp and NetScaler in AWS along with best practices and lessons learned from Citrix and Amazon. We will discuss architecture designs for Citrix in AWS with the latest Citrix Cloud solutions to help simplify a deployment of Citrix into an AWS resource location. You will also learn how Citrix Smart Build blueprints make it easy to automate the deployment of your solution. Whether you need to expand an existing XenApp or XenDesktop landscape in a hybrid cloud model, migrate to a cloud-first infrastructure strategy, or upgrade to a managed service like Citrix Cloud, Citrix and AWS have a solution for you.

If you aren’t able to make it to Synergy this year and want more information about Citrix on AWS, check out the Accelerate Program for Citrix for more information.

What is the Accelerate Program for Citrix?

The Accelerate Program for Citrix enables customers to quickly adopt or migrate Citrix solutions on AWS. If you are running Citrix XenApp, XenDesktop and/or NetScaler on-premises and are interested in moving to the AWS Cloud, then this could be a really interesting offer for you!

In cooperation with Citrix (an Advanced APN Technology Partner), we have assembled an AWS Accelerator to help you to plan and execute a successful trial migration while using your existing licenses. The migration process makes use of Citrix Smart Tools. Smart Tools includes a set of proven deployment blueprints that will help you to move your existing deployment to AWS. You can also deploy the XenApp and XenDesktop Service on Citrix Cloud, and in conjunction use Smart Tools to manage your AWS-based resources.

What is the Trial Period funding and how does it work?

To provide customers with a controlled trial or migration period, AWS and Citrix are offering a 60-day customer trial package. Each nominated and approved customer will receive $5,000 in AWS Promotional Credits; Citrix XenApp and/or XenDesktop software for up to 25 Citrix users for up to 60 days; and Citrix Smart Tools with AWS Smart Build Blueprints and Smart Scale auto scaling.

Questions? Contact AWS (email us) or submit a request with Citrix (registration form) and ask to join the AWS Accelerator.


AWS HIPAA Program Update – Removal of Dedicated Instance Requirement

by Aaron Friedman | in Amazon EC2, Amazon ECS, Amazon EMR, Amazon S3, AWS Partner Solutions Architect (SA) Guest Post, Healthcare

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect with AWS.

I love working with Healthcare Competency Partners in the AWS Partner Network (APN) as they deliver solutions that meaningfully impact lives. Whether building SaaS solutions on AWS tackling problems like electronic health records, or offering platforms designed to achieve HIPAA compliance for customers, our AWS Healthcare Competency Partners are constantly raising the bar on what it means to deliver customer-obsessed cloud-based healthcare solutions.

Our Healthcare Competency Partners who offer solutions that store, process, and transmit Protected Health Information (PHI) sign a Business Associate Addendum (BAA) with AWS. As part of the AWS HIPAA compliance program, Healthcare Competency Partners must use a set of HIPAA-eligible AWS services for portions of their applications that store, process, and transmit PHI. You can find additional technical guidance on how to configure those AWS services in our HIPAA security and compliance white paper. For any portion of your application that does not involve any PHI, you are of course able to use any of our 90+ services to deliver the best possible customer experience.

We are rapidly adding new HIPAA-eligible services under our HIPAA compliance program, and I am very excited to see how Healthcare Competency Partners are quickly adopting these new services as part of their solutions involving PHI. Today, I want to communicate a recent change to our HIPAA compliance program that should be positively received by many of our APN Partners in Healthcare and Life Sciences – APN Partners who have signed a BAA with AWS are no longer required to use Amazon EC2 Dedicated Instances and Dedicated Hosts to process PHI. APN Partners and other AWS customers should continue to take advantage of the features of VPC as they migrate from Dedicated Instances or Dedicated Hosts to default tenancy.

Over the years, we have seen tremendous growth in the use of the AWS Cloud for healthcare applications. APN Partners like Philips now store and analyze petabytes of PHI in Amazon S3, and others like ClearDATA provide platforms which align to HIPAA or HITRUST requirements for their customers to build on. Customer feedback drives 90+% of our roadmap, and when we heard many customers and APN Partners requesting this change, we listened.

Optimizing your architecture

One of our Leadership Principles at Amazon is “Invent and Simplify”. In the spirit of that leadership principle, I want to quickly describe several optimizations I anticipate APN Partners might make to simplify their architecture with the aforementioned change to the AWS HIPAA compliance program.

As always, if you have specific questions, please reach out to your Partner Manager or AWS Account Manager and they can pull in the appropriate resources to help you dive deeper into your optimizations.

Optimizing compute for cost and performance

With default tenancy on EC2, you can now use all currently available EC2 instance types for architecting applications to store, process, and transmit PHI. This means that you can leverage Spot instances for all instance types, such as for batch workloads, as well as use our burstable compute t2 family of EC2 instances in your applications, rather than using the m3 or m4 instance family.
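As a concrete illustration, here is a minimal boto3 sketch of launching a burstable t2 instance with default tenancy; the AMI ID and subnet ID are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI
    InstanceType="t2.large",              # burstable types were previously unavailable for PHI
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet in your VPC
    Placement={"Tenancy": "default"},     # no longer must be "dedicated"
)
print(resp["Instances"][0]["InstanceId"])
```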

Right-sizing for Microservices

Many of our Healthcare Competency Partners, especially those who build SaaS applications, use microservices architectures. They often use Amazon ECS for Docker container orchestration, which runs on top of Amazon EC2. The ability to use default tenancy EC2 instances for PHI will enable you to further right-size your applications by not having to factor in Dedicated Instances or Dedicated Hosts.

Simplifying your big data applications

Amazon EMR is a HIPAA-eligible service that many Healthcare Competency Partners use to analyze large datasets containing PHI. When using dedicated tenancy, these Partners needed to launch EMR clusters in VPCs with dedicated tenancy. This is how an architecture might look using dedicated tenancy, where the left side is a VPC with dedicated tenancy interacting with an Amazon S3 bucket containing PHI.

With the new update, you can logically consolidate these two VPCs into a single default tenancy VPC, which can simplify your architecture by removing components such as VPC peering and eliminating the need to keep CIDR blocks from overlapping between VPCs.

Partner segregation by account rather than VPC

Many of our Healthcare Competency Partners, especially managed services providers (MSPs), prefer to segregate their customers or applications into different accounts for the purposes of cost allocation and compute/storage segregation. With the removal of the requirement to use Dedicated Instances or Dedicated Hosts, you can more easily segregate customers and applications into the appropriate accounts.

Conclusion

For more information on HIPAA on AWS, please see this blog post by our APN Compliance Program Leader, Chris Whalley, as well as check out our HIPAA in the Cloud page.

If you have any questions, please feel free to reach out to your AWS Account Manager or Partner Manager, and they can help direct you to the appropriate AWS resources. You can also email apn-blog@amazon.com and we will route your questions to the appropriate individuals.

Learn about the SAP and Amazon Web Services Collaboration for Life Sciences Customers

by Kate Miller | in APN Technology Partners, AWS for SAP, Life Sciences

By Christopher Chen. Chris is the Global Strategic Technology Alliances Lead for HCLS at AWS.

This week, SAP is holding their annual user conference, SAPPHIRE NOW, and we’re excited to announce a new collaboration between AWS and SAP that will support our mutual Life Sciences customers on their journey to the cloud.

SAP has been a leader in providing solutions to the Life Sciences market for over 45 years, and many of our customers and APN Partners have been moving HANA workloads to AWS. Life Sciences customers often work in highly regulated environments, and through our collaboration with SAP, we aim to help their businesses run better by enabling this transition with new tools.

What are some unique considerations for Life Sciences customers?

As explained by Chris Whalley, AWS Partner Network (APN) Compliance Program Leader, in an earlier blog post detailing GxP (Good [anything] Practices) on AWS, we’re deeply invested in helping enable customers to run GxP workloads on AWS. In January 2016, we published a whitepaper titled “Considerations for Using AWS Products in GxP Systems” with the assistance of Lachman Consultant Services, Inc. (Lachman Consultants), one of the most highly respected consulting firms on FDA and international regulatory compliance issues affecting the pharmaceutical and medical device industry today.

Initially, we’ve found that many IT departments have questions about how they can meet their compliance needs and achieve the same level of control in a cloud environment that they do in an on-premises model. By being able to treat your infrastructure as code on AWS, you can apply the same level of control to your infrastructure as you do to your software, and easily test and validate each change to your environment. You can take advantage of AWS to automate the traceability and audit process so that you can not only meet your compliance requirements but also drive continuous compliance as you increase agility and lower your operational burden.

One of our Life Sciences customers, Moderna Therapeutics, is building a digital company with full integration in a GxP environment. According to Marcello Damiani, Chief Digital Officer at Moderna, “We worked with SAP and AWS because we wanted to take advantage of the flexibility that AWS provides and, at the same time, ensure compliance with all the applicable regulations of the biopharma industry. In the GxP space, it was very helpful that AWS had the industry experience to help guide us and work with our quality team to understand what this all meant in the cloud.”

Our goal is to help enable and accelerate the journey of digital transformation for Life Sciences customers as they adopt SAP’s cloud-enabled innovations. Customers can leverage the tools and resources that SAP and AWS make available to help meet their compliance objectives with increased speed and agility while supporting mission-critical applications like SAP S/4HANA and SAP BW/4HANA.

Easing the process of getting HANA environments up and running on AWS

I’m excited to announce that we will soon make available guidance and documentation for the validated deployment of SAP HANA qualified for GxP. This documentation will build on our previous work with SAP in developing the “SAP HANA on the AWS Cloud” Quick Start, which allows you to scale out HANA to 34 TB. The Quick Start launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using the AWS security and availability best practices required while operating in a validated environment for Life Sciences customers. This is done in as little as an hour by deploying a collection of AWS CloudFormation templates and Amazon Machine Images (AMIs) that configure SAP HANA. The templates included in the documentation enable all available auditing and logging features, and are coupled with an IQ (Installation Qualification) document to enable a qualified environment once deployed.
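For a sense of what a template-driven deployment looks like, here is a hedged boto3 sketch of launching a CloudFormation stack; the template URL, stack name, and parameter names are illustrative placeholders, not the actual Quick Start interface.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="sap-hana-demo",
    TemplateURL="https://s3.amazonaws.com/example-bucket/sap-hana-master.template",  # hypothetical
    Parameters=[
        {"ParameterKey": "VPCCIDR", "ParameterValue": "10.0.0.0/16"},  # hypothetical parameters
        {"ParameterKey": "KeyName", "ParameterValue": "my-key-pair"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # the stack creates IAM resources
)

# Block until the stack finishes creating (or fails)
cfn.get_waiter("stack_create_complete").wait(StackName="sap-hana-demo")
```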

Transitioning more quickly to HANA

In addition, we will help existing SAP Life Sciences customers quickly transition to HANA with FAST, a rapid test migration program for non-HANA Suite or BW workloads. Now, instead of taking months to migrate workloads to HANA, our Life Sciences customers can be up and running with HANA on AWS with minimal effort and without long-term commitments. Our first customers have already achieved success with this program, and have had test environments up and running in less than a week’s time.

Minding your IQ’s, OQ’s, and PQ’s – qualification of SAP HANA on AWS

Validation is the process of establishing documentary evidence demonstrating that a procedure, process, or activity carried out in testing and then production maintains the desired level of compliance at all stages. To achieve and maintain a validated environment, our customers monitor the activity of their qualified systems and leverage many of the controls and services AWS provides. Most relevant to our joint customers is the required documentation around Installation Qualification (IQ), Operational Qualification (OQ), and Performance Qualification (PQ). We know how important this is, so to simplify the process we are working with SAP to share our templates for these qualification requirements. You will be able to use the IQ Template from AWS in combination with the OQ and PQ templates available directly from SAP and our ecosystem of SI Partners.

Joe Miles, Global Vice President of Life Sciences at SAP states, “Life Sciences companies are aggressively moving to the cloud to reduce cost, complexity, and risk of their – increasingly digital – business. SAP chose to collaborate with Amazon Web Services in Life Sciences because of its deep industry experience in managing regulated, GxP datasets. More importantly, AWS provides an exceptional ability to co-innovate with SAP. Programs like FAST are the first of many solutions that will come out of the SAP – AWS collaboration and we look forward to seeing more of those innovations in the future.”

We are excited to help our customers continue to get their mission-critical SAP solutions to HANA on AWS faster, easier, and with an even better path to help them meet their specific compliance requirements. If you want to learn more about running SAP on AWS, please join us for an upcoming session at SAPPHIRE NOW 2017 on Wednesday, May 17, at 3:00 PM at Booth 539!

Read on for session details.

Cloud Compliance for Life Sciences in the Digital Age: Interactive Session

Life sciences organizations running regulated workloads in the cloud can achieve continuous compliance with the mandates of auditors and regulation entities. Hear real-world use cases of how heavily regulated environments maintain governance and control, and learn about innovative Life Sciences customers’ journeys in implementing SAP on AWS. Gain insights into some of the AWS services customers can use to accomplish continuous compliance in the transition to a digital world and move from point-in-space testing of their environment to near real-time testing. Finally, find out how the SAP collaboration with AWS in Life Sciences can speed your transition and deliver mission-critical applications while accelerating value for your organization.

Join our panel of experts including Ander Tallett – Associate Director Business Systems at Moderna Therapeutics, Joe Miles – SAP’s Global Vice President of Life Sciences, and Chris Whalley – AWS’s Industry Compliance Lead as they provide insights and answer questions on this important topic.


Why Use AWS Lambda in a Custom VPC?

by Akash Jain | in AWS Lambda, AWS Partner Solutions Architect (SA) Guest Post, Cloud Managed Services

By Akash Jain. Akash is a Partner Solutions Architect (SA) at AWS. 

As a Partner Solutions Architect (SA), I work closely with APN Partners as they look to use AWS services in innovative ways to address their customers’ use cases. Recently, I came across an interesting use case with an APN Partner who configured an AWS Lambda function to access resources running in a custom virtual private cloud (VPC) to call an internal API over the virtual private network (VPN). In this post, I’ll walk through how this APN Partner is evolving an existing architecture to take advantage of AWS Lambda’s capabilities and to optimize the managed services they provide to their customers.

Existing architecture

For those who are new to AWS Lambda, it is a compute service that lets you run code (written up as “Lambda functions”) without provisioning or managing servers. These functions execute in response to a wide variety of AWS service events and custom events, and can be used in a variety of scenarios. AWS Lambda executes your code only when needed and scales automatically from a few requests per day to thousands per second. With AWS Lambda, you pay only for the requests served and the compute time required to run your code.

The particular use case I’ll discuss today involves an APN Partner who needed to integrate a customer’s AWS environment with their own on-premises environment to provide managed services. Several VPN connections were set up between their on-premises environment and different AWS Regions. As a part of the integration, all system alerts in a region needed to be consolidated in one place. In this case, that was the APN Partner’s on-premises environment.

To make this happen, they set up Amazon CloudWatch alerts to trigger a Lambda function. The job of the Lambda function was to call an externally hosted web service and pass the alert as payload. The web service could then convert the CloudWatch alerts to a format that the Netcool API, which was hosted on premises, could understand.

The following diagram outlines the setup of this architecture. For simplicity’s sake, I’ve chosen not to represent components like subnets, customer gateway, and VPN gateway.

After we reviewed this architecture with the APN Partner, they chose to re-evaluate and optimize it, for a few reasons:

  • Extra cost – A dedicated system (VM) was in place to host the web service, and its job was to convert the message from the CloudWatch alert format to the Netcool API format. Getting the VM, OS, and other software in place required an upfront cost.
  • Maintenance – Managing and maintaining this server added an extra layer of maintenance. The team had to patch the server regularly to keep it up-to-date.
  • Security complexity – The API for converting the format was exposed externally, so it resulted in an additional security layer for authentication, authorization, and DoS/DDoS protection.
  • Low scalability – The web service could not auto-scale because it was installed on a single VM.
  • Poor fault tolerance – If the server went down, all the alerts would be lost.

Accessing resources in a VPC from a Lambda function

Working with the APN Partner, we decided to take advantage of AWS Lambda in a way that would alleviate these concerns with the existing architecture. We asked ourselves two questions: “What if we move the format conversion logic of the web service into AWS Lambda itself?” and then, “How can the modified Lambda function call the Netcool API, which is not exposed externally?”

The answer is to access resources in a VPC from an AWS Lambda function, a helpful feature that was introduced by AWS in early 2016. With this feature, the AWS Lambda function can call the Netcool API over the existing VPN connection, which was established for secure administrative access. When the Lambda function accesses resources in a VPC, it gets a private IP address from the VPC. It can then communicate with any service within the VPC or with any other system accessible from that VPC.
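A minimal sketch of such a VPC-enabled function is shown below: it receives a CloudWatch alarm (delivered through an SNS trigger), reshapes it, and posts it to an internal endpoint reachable only over the VPN. The endpoint URL and target payload format are hypothetical stand-ins for the Partner’s Netcool integration.

```python
import json
import urllib.request

NETCOOL_ENDPOINT = "http://10.0.12.34:8080/alerts"  # hypothetical private IP, reachable via VPN

def handler(event, context):
    # CloudWatch alarm notifications arrive wrapped in an SNS envelope
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])

    # Reshape the alarm into the format the internal API expects (hypothetical)
    payload = json.dumps({
        "source": "cloudwatch",
        "name": alarm["AlarmName"],
        "severity": alarm["NewStateValue"],
        "detail": alarm["NewStateReason"],
    }).encode("utf-8")

    req = urllib.request.Request(
        NETCOOL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```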

The benefits of this approach are:

  • Manageability – The Lambda function automatically runs your code without requiring you to provision or manage servers. This means that your support team can focus on important system alerts instead of managing and maintaining the infrastructure around it.
  • Minimized cost – You pay for what you use. For Lambda, you’re charged based on how many requests the Lambda function receives and how long your code executes. Since we’re working with system alerts in this scenario, I don’t expect the Lambda function to cost more than the monthly charges for running a server.
  • Security – Because the Lambda function is VPC-enabled, all communications between AWS and the on-premises environment will be over a secure tunnel.
  • High scalability – Lambda can launch as many copies of the function as needed to scale to the rate of incoming events.
  • Fault tolerance – Lambda can be a part of multiple subnets spanning multiple Availability Zones.

Lambda functions automatically scale based on the number of events they process. For VPC-enabled Lambda functions, you should make sure that your subnet has enough elastic network interfaces (ENIs) and IP addresses to meet the demand. For details on calculating ENI capacity, see the AWS Lambda documentation.
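As a rough guide, the Lambda documentation at the time of writing suggests an ENI capacity estimate along the following lines; the workload figures below are hypothetical.

```python
# Estimate from the AWS Lambda documentation:
#   ENI capacity ~= projected peak concurrent executions * (memory in GB / 3 GB)
peak_concurrent_executions = 300   # hypothetical workload figure
memory_gb = 1.5                    # hypothetical function memory setting

eni_capacity = peak_concurrent_executions * (memory_gb / 3.0)
print(f"Plan for roughly {eni_capacity:.0f} ENIs (and IP addresses) across your subnets")
```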

A Lambda function enabled to access resources within a VPC may take some time to instantiate, because an ENI needs to be initialized and attached to it. After a long period of inactivity, the first alerts may also take some extra time to process. A workaround I suggest is to keep the Lambda function warm by triggering dummy CloudWatch alerts, as explained in a post on the A Cloud Guru blog.
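One way to implement this workaround is to have a scheduled CloudWatch Events rule invoke the function periodically and return early when the ping arrives, so the warm-up invocations stay cheap. A minimal sketch:

```python
def handler(event, context):
    # Scheduled CloudWatch Events pings carry "source": "aws.events"
    if event.get("source") == "aws.events":
        return "warm"  # keep-warm ping: do nothing

    # ... normal alert-processing path continues here ...
```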

Driving managed service optimization

As the APN Partner in this example is a Managed Service Provider (MSP), I’d like to tie this example back to how next-gen MSPs can drive workload optimizations and cost efficiencies for their customers.

Service delivery quality is a key value next-gen MSPs bring to customers. An essential goal for MSPs is to develop the tools, processes, and governance required to deliver reliable services cost-effectively. By accessing resources in a custom VPC from a Lambda function and leveraging an existing VPN connection, an MSP can send alerts more securely, reliably, and cost-effectively.

Conclusion

In this post, we briefly discussed the benefits of running Lambda functions that can access resources inside your private VPC. Your Lambda functions can access Amazon Redshift data warehouses, Amazon ElastiCache clusters, Amazon Relational Database Service (Amazon RDS) instances, and service endpoints that are accessible only from within a particular VPC, such as resources over VPN. I recommend that you keep sufficient ENI capacity and IP addresses available in your subnets for auto-scaling purposes, and keep your Lambda function warm if you need a quicker response after longer periods of inactivity.

For more information, I recommend that you take a look at the AWS documentation. Do you have comments? Talk to me in the comments section. I’d love to hear from you.

Lessons Learned from Working with Amazon Web Services – A Post by Infor

by Kate Miller | in All-in with AWS, APN Competency Partner, APN Technology Partners, AWS Competencies

Three years ago, Infor – one of the world’s leading providers of enterprise applications and an AWS Partner Network (APN) Advanced Technology Partner – announced they were going “all in” on AWS. It has been truly exciting to see their progress as they migrate their business and their enterprise customers to the cloud. From using new EBS volume types to reduce database backup costs by 75%, to leveraging serverless computing to automate testing, they continue to innovate, adopting new AWS services and features to enhance their products and increase efficiencies. Building on AWS has enabled Infor to concentrate on its core competency of developing innovative industry-specific applications for enterprise customers, such as Fuller’s, HellermannTyton, and Confluence Health, rather than on managing the company’s underlying infrastructure. As highlighted in this recent blog post written by Infor Lab’s SVP Brian Rose, the bet on AWS continues to pay off for Infor.

For more information on Infor, see the “Friends Don’t Let Friends Build Data Centers” APN Blog.

Read “Lessons Learned from Working with Amazon Web Services” >>

Testing SaaS Solutions on AWS

by Tod Golding | in AWS Partner Solutions Architect (SA) Guest Post, How-to Guide, SaaS on AWS

Tod Golding is a Partner Solutions Architect (SA) at AWS, focused on SaaS.

The move to a software as a service (SaaS) delivery model is often motivated by a fundamental need for greater agility and customer responsiveness. SaaS providers often succeed and thrive based on their ability to rapidly release new features without compromising the stability of their solutions. Achieving this level of agility starts with a commitment to building a robust DevOps pipeline that includes a rich collection of automated tests. For SaaS providers, these automated tests are at the core of their ability to effectively assess the complex dimensions of multi-tenant load, performance, and security.

In this blog post, we’ll highlight the areas where SaaS can influence your approach to testing on AWS. In some cases, SaaS will simply extend your existing testing models (load, performance, and so on). In other cases, the multi-tenant nature of SaaS will introduce new considerations that will require new types of tests that exercise the SaaS-specific dimensions of your solution. The sections that follow examine each of these areas and provide insights into how expanding the scope of your tests can add value to SaaS environments.

SaaS Load/Performance Testing

In a multi-tenant universe, your tests go beyond simply ensuring that your system is healthy—tests must also ensure that your system can effectively respond to unexpected variations in tenant activity that are commonly associated with SaaS systems. Your tests must be able to verify that your application’s scaling policies can respond to the continually changing peaks and valleys of resource consumption associated with SaaS environments. The reality is, the unpredictability of SaaS loads combined with the potential for cross-tenant performance degradation makes the bar for SaaS load and performance testing much higher. Customers will certainly be unhappy if their system’s performance is periodically affected by the activities of other tenants.

For SaaS, then, the scope of testing reaches beyond performance. It’s about building a suite of tests that can effectively model and evaluate how your system will respond to the expected and the unexpected. In addition to ensuring that customers have a positive experience, your tests must also consider how cost-efficiently your system achieves scale. If you are over-allocating resources in response to activity, you’re likely impacting the bottom line for the business.

The following diagram is an idealized representation of how SaaS organizations prefer to model the connection between load and resource consumption. Here, you see actual tenant consumption in blue and the allocated resources in red. In this model, you’ll notice that the application’s resources are allocated and deallocated in lockstep with tenant activity. This is every SaaS architect’s dream. Here, each tenant has a positive experience without over-committing any resources.

The patterns in this chart represent a snapshot of time on a given day. Tomorrow’s view of this same snapshot could look very different. New tenants may have signed up that are pushing the load in entirely new ways. This means your tests must consider the spectrum of load profiles to verify that changes in tenant makeup and application usage won’t somehow break your scaling policies.

Given this consumption goal and the variability of tenant activity, you’ll need to think about how your tests can evaluate your system’s ability to meet these objectives. The following list identifies some specific areas where you might augment your load and performance testing strategy in a SaaS environment:

  • Cross-tenant impact tests – Create tests that simulate scenarios where a subset of your tenants place a disproportionate load on your system. The goal here is to determine how the system responds when load is not distributed evenly among tenants, and assess how this may affect overall tenant experience. If your system is decomposed into separately scalable services, you’ll want to create tests that validate the scaling policies for each service to ensure that they’re scaling on the right criteria.
  • Tenant consumption tests – Create a range of load profiles (e.g., flat, spiky, random) that track both resource and tenant activity metrics, and determine the delta between consumption and tenant activity. You can ultimately use this delta as part of a monitoring policy that could identify suboptimal resource consumption. You can also use this data with other testing data to see if you’ve sized your instances correctly, have IOPS configured correctly, and are optimizing your AWS footprint.
  • Tenant workflow tests – Use these tests to assess how the different workflows of your SaaS application respond to load in a multi-tenant context. The idea is to pick well-known workflows of your solution, and concentrate load on those workflows with multiple tenants to determine if these workflows create bottlenecks or over-allocation of resources in a multi-tenant setting.
  • Tenant onboarding tests – As tenants sign up for your system, you want to be sure they have a positive experience and that your onboarding flow is resilient, scalable, and efficient. This is especially true if your SaaS solution provisions infrastructure during the onboarding process. You’ll want to determine that a spike in activity doesn’t overwhelm the onboarding process. This is also an area where you may have dependencies on third-party integrations (billing, for example). You’ll likely want to validate that these integrations can support their SLAs. In some cases, you may implement fallback strategies to handle potential outage for these integrations. In these cases, you’ll want to introduce tests that verify that these fault tolerance mechanisms are performing as expected.
  • API throttling tests – The idea of API throttling is not unique to SaaS solutions. In general, any API you publish should include the notion of throttling. With SaaS, you also need to consider how tenants at different tiers can impose load via your API. A tenant in a free tier, for example, may not be allowed to impose the same load as a tenant in the gold tier. The main goal here is to verify that the throttling policies associated with each tier are being successfully applied and enforced (see the sketch after this list).
  • Data distribution tests – In most cases, SaaS tenant data will not be uniformly distributed. These variations in a tenant’s data profile can create an imbalance in your overall data footprint, and may affect both the performance and cost of your solution. To offset this dynamic, SaaS teams will typically introduce sharding policies that account for and manage these variations. Sharding policies are essential to the performance and cost profile of your solution, and, as such, they represent a prime candidate for testing. Data distribution tests allow you to verify that the sharding policies you’ve adopted will successfully distribute the different patterns of tenant data that your system may encounter. Having these tests in place early may help you avoid the high cost of migrating to a new partitioning model after you’ve already stored significant amounts of customer data.
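To make the tier-throttling idea concrete, here is a hedged sketch of a test that drives requests above a free tier’s limit and asserts that the API starts returning HTTP 429. The endpoint, tier header, and limit are hypothetical placeholders.

```python
import time
import urllib.error
import urllib.request

API_URL = "https://api.example.com/v1/reports"  # hypothetical endpoint
FREE_TIER_LIMIT_PER_SECOND = 5                  # hypothetical documented limit

def test_free_tier_is_throttled():
    throttled = False
    # Deliberately exceed the free tier's request rate
    for _ in range(FREE_TIER_LIMIT_PER_SECOND * 3):
        req = urllib.request.Request(
            API_URL,
            headers={"X-Tenant-Tier": "free"},  # hypothetical tier header
        )
        try:
            urllib.request.urlopen(req, timeout=5)
        except urllib.error.HTTPError as err:
            if err.code == 429:  # Too Many Requests
                throttled = True
                break
        time.sleep(0.01)
    assert throttled, "free-tier requests were never throttled"
```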

As you can see, this test list is focused on ensuring that your SaaS solution will be able to handle load in a multi-tenant context. Load for SaaS is often unpredictable, and you will find that these tests often represent your best opportunity to uncover key load and performance issues before they impact one or all of your tenants. In some cases, these tests may also surface new points of inflection that may merit inclusion in the operational view of your system.

Tenant Isolation Testing

SaaS customers expect that every measure will be taken to ensure that their environments are secured and inaccessible by other tenants. To support this requirement, SaaS providers build in a number of policies and mechanisms to secure each tenant’s data and infrastructure. Introducing tests that continually validate the enforcement of these policies is essential to any SaaS provider.

Naturally, your isolation testing strategy will be shaped heavily by how you’ve partitioned your tenant infrastructure. Some SaaS environments run each tenant in their own isolated infrastructure while others run in a fully shared model. The mechanisms and strategies you use to validate your tenant isolation will vary based on the model you’ve adopted.

The introduction of IAM policies provides an added layer of security to your SaaS solution. At the same time, it can add a bit of complexity to your testing model. It’s often difficult to find natural mechanisms to validate that your policies are performing as expected. This is typically addressed through the introduction of test scripts and API calls that attempt to access tenant resources with specific emphasis on simulating attempts to cross-tenant boundaries.

The following diagram provides one example of this model in action. It depicts a set of resources (Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon DynamoDB items, and Amazon Simple Storage Service (Amazon S3) buckets) that belong to two tenants. To enforce isolation of these tenant resources, this solution introduces separate IAM policies that will scope and limit access to each resource.

With these policies in place, your tests must now validate the policies. Imagine, for example, that a new feature introduces a dependency on a new AWS resource. When introducing this new resource, the team happens to overlook the need to create the corresponding IAM policies to prevent cross-tenant access to that resource. Now, with good tests in place, you should be able to detect this violation. Without these tests, you have no way of knowing that your tenant isolation model is being accurately applied.
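As an illustration, here is a hedged sketch of an automated cross-tenant isolation test: it assumes one tenant’s scoped IAM role, attempts to read another tenant’s data, and requires an AccessDenied error. The role ARN, bucket, and key layout are hypothetical placeholders.

```python
import boto3
from botocore.exceptions import ClientError

def test_tenant_a_cannot_read_tenant_b_data():
    # Assume tenant A's scoped role (hypothetical ARN)
    creds = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::111122223333:role/tenant-a-app-role",
        RoleSessionName="isolation-test",
    )["Credentials"]

    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Attempt to cross the tenant boundary; this must fail
    try:
        s3.get_object(Bucket="saas-tenant-data", Key="tenant-b/records.json")
    except ClientError as err:
        assert err.response["Error"]["Code"] == "AccessDenied"
    else:
        raise AssertionError("cross-tenant read succeeded; isolation is broken")
```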

As part of isolation testing, you may also want to introduce tests that validate the scope and access of specific application and management roles. For example, SaaS providers often have separate management consoles that have varying levels of access to tenant data. You’ll want tests that verify that the access levels of these roles match the scoping policies for each role.

Tenant Lifecycle Testing

The management of SaaS tenants requires you to consider the full lifecycle of events that may be part of a tenant’s experience. The following diagram provides a sampling of events that are often part of the overall tenant lifecycle.

The left side of this diagram shows the actions that tenants might take, and the right side shows some of the operations that a SaaS provider’s account management team might perform in response to those tenant actions.

The tests you would introduce here would validate that the system is correctly applying the policies of the new state as tenants go through each transition. If, for example, a tenant account is suspended or deactivated, you may have policies that determine how long data is retained for that tenant. These policies may also vary based on the tier of the tenant. Your tests would need to verify that these policies are working as expected.

A tenant’s ability to change tiers also represents a good candidate for testing, because a change in tiers would also change a tenant’s ability to access features or additional resources. You’ll also want to consider the user experience for tier changes. Does the tenant need to log out and start a new session before their tier change is recognized? All of these policies represent areas that should be covered by your tier tests.

Tier Boundary Testing

SaaS solutions are typically offered in a tier-based model where SaaS providers may limit access to features, the number of users, the size of data, and so on based on the plan a tenant has selected. The system will then meter consumption and apply policies to control the experience of each tenant.

This tiering scheme is a good candidate for testing in SaaS environments. SaaS teams should create tests that validate that the boundaries of each tier are being enforced. This typically requires simulating configuration and consumption patterns that will exceed the boundary of a tier and validating that the policies associated with that boundary are correctly triggered. The policies could include everything from limiting access to sending notifications.

Fault Tolerance Testing

Fault tolerance is a general area of concern for all solutions. It’s also an area that is addressed in depth by the industry with solid guidance, frameworks, and tools. The bar for fault tolerance in SaaS applications is very high. If your customers are running on shared infrastructure and that environment is plagued by availability problems, these problems will be visible to your entire population of customers. Naturally, this can directly impact your success as a SaaS provider.

It’s beyond the scope of this blog post to dig into the various strategies for achieving better fault tolerance, but we recommend that you add this to the list of testing areas for your SaaS environment. SaaS providers should invest heavily in adopting strategies that can limit or control the scope of outages and introduce tests that validate that these mechanisms are performing as expected.

Using Cloud Constructs

Much of the testing that we’ve outlined here is made simpler and more cost effective on AWS. With AWS, you can easily spin up environments and simulate loads against those environments. This allows you to introduce tests that mimic the various flavors of load and performance you can expect in your SaaS environments. Then, when you’re done, you can tear these environments down just as quickly as you created them.

Testing with a Multi-Tenant Mindset

SaaS multi-tenancy brings with it a new set of load, performance, isolation, and agility considerations—each of which adds new dimensions to your testing mindset. This blog post provided a sampling of considerations that might shape your approach to testing in a SaaS environment. Fortunately, testing SaaS solutions is a continually evolving area with a rich collection of AWS and partner tools. These tools can support your efforts to build a robust testing strategy that enhances the experience of your customers while still allowing you to optimize the consumption of your solution.

Exciting News for Red Hat OpenShift Customers on AWS

by Kate Miller | in APN Technology Partners, Containers, Red Hat

Yesterday, Red Hat and Amazon Web Services (AWS) announced an extended strategic alliance to natively integrate access to AWS services into Red Hat OpenShift Container Platform. With this new offering, Red Hat OpenShift customers will be able to seamlessly configure, deploy, and scale AWS services like Amazon RDS, Amazon Aurora, Amazon Athena, Amazon Route 53, and AWS Elastic Load Balancing directly within the Red Hat OpenShift console. Red Hat and AWS demonstrated these integrations yesterday at Red Hat Summit 2017 in Boston, MA.

The Open Service Broker API provides a standardized interface for applications to interact with external, third-party services. The native integration of AWS Service Brokers in Red Hat OpenShift makes it easier for Red Hat OpenShift customers to build applications that consume AWS services and features directly from the OpenShift Container Platform web console or the OpenShift CLI.

To learn more about this announcement, read the press release here, and check out Red Hat’s blog, “AWS and Red Hat — Digging a Little Deeper”.

Learn more about Red Hat on AWS by visiting our Red Hat page.