Category: Amazon EC2


Partner SA Roundup – July 2017

This month, Juan Villa, Pratap Ramamurthy, and Roy Rodan from the Emerging Partner SA team highlight a few of the partners they work with. They’ll be exploring Microchip, Domino, and Cohesive Networks.

Microchip Zero Touch Secure Provisioning Kit, by Juan Villa

AWS IoT is a managed cloud platform that enables connected devices to easily and securely interact with cloud applications and other devices. In order for devices to interact with AWS IoT via the Message Queue Telemetry Transport (MQTT) protocol, they must first authenticate using Transport Layer Security (TLS) mutual authentication. This process involves the use of X.509 certificates on both the devices and in AWS IoT. Each certificate is associated with a key pair: a public key, which is embedded in the certificate, and a corresponding private key. An IoT device needs to store the private key that corresponds to its certificate in order to establish a TLS connection with mutual authentication.
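The mutual-authentication handshake described above can be sketched with Python's standard ssl module. This is a minimal illustration, not AWS IoT's reference implementation: the certificate file paths are hypothetical placeholders, and a real device would typically hand a context like this to an MQTT client library.

```python
import ssl

def build_mutual_tls_context(ca_path, cert_path, key_path):
    """Build a TLS context for mutual authentication with an AWS IoT endpoint.

    The device validates the server against the Amazon root CA, and presents
    its own X.509 certificate backed by its securely stored private key.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED            # verify the server's identity
    ctx.load_verify_locations(cafile=ca_path)      # Amazon root CA certificate
    ctx.load_cert_chain(certfile=cert_path,        # device certificate
                        keyfile=key_path)          # device private key
    return ctx
```

On a device with a secure element like the ATECC508A, the private key never leaves the chip; the TLS stack instead delegates the signing operation to the secure element rather than loading a key file as shown here.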

Private keys can be difficult to store securely on IoT devices. It's easy to simply store data in a device's local memory, but this does little to protect the key from extraction. The hardware needed to read the memory contents of most microcontrollers and memory components used on IoT devices is inexpensive and easy to obtain. This means that private keys used for authentication and establishing trust need to be stored in a secure manner.

This is where a secure element chip comes in! Microchip, an APN Advanced Technology Partner, is a silicon manufacturer that makes a secure element chip called the ATECC508A. This chip provides hardware-based, tamper-resistant key storage. In fact, once a key is stored in the ATECC508A, it cannot be read back. The chip also includes hardware-based cryptographic acceleration, which allows it to perform cryptographic operations quickly and power-efficiently without ever exposing the stored key. When considering the ATECC508A for your product, keep in mind that Microchip can preload certificates on the secure element during manufacturing, before delivery. Combining this feature with AWS IoT's support for custom certificate authorities and just-in-time registration can streamline device provisioning and security.

To make this secure element chip easy for you to try out, Microchip makes an evaluation kit called the Zero Touch Secure Provisioning Kit. This kit includes a SAM G55 Cortex-M4 microcontroller, the ATECC508A secure element, and an ATWINC1500 power-efficient 802.11 b/g/n module, and comes with instructions on how to get started with AWS IoT. With this combination of silicon products you can begin testing and developing your next IoT product in a secure fashion.

Before you work on your next IoT project, I recommend that you consider a secure element in your design. For more information on ATECC508A, please read the datasheet on the Microchip website.

 

Domino Data Science Platform, by Pratap Ramamurthy

Machine learning, artificial intelligence, and predictive analytics are all data science techniques. Data scientists analyze data, search for insights that can be extracted, and build predictive models to solve business problems. To help data scientists with these tasks, a new set of tools, such as Jupyter notebooks, is becoming popular, along with a wide variety of software packages ranging from deep learning frameworks like MXNet to CUDA drivers. Data science as a field is growing rapidly as companies increase their reliance on these new technologies.

However, supporting a team of data scientists can be challenging. They need access to different tools and software packages, as well as a variety of servers connected to the cloud. They want to collaborate by sharing projects, not just code or results. They want to be able to publish models with minimal friction. While data scientists want flexibility, companies need to ensure security and compliance. Companies also need to understand how resources like data and compute power are being used.

Domino, an APN Advanced Technology Partner, solves these challenges by providing a convenient platform for data scientists to spin up interactive workspaces using the tools that they already know and love (e.g., Jupyter, RStudio, and Zeppelin), as well as commercial tools like SAS and MATLAB, as seen in the diagram below.

Image used with permission

In the Domino platform, users can run experiments on a wide variety of instances that mirror the latest Amazon EC2 options provided by AWS, as seen in the screenshot. Customers can run a notebook on X1 family instances with up to 2 TB of RAM. If more computational power is needed, they can switch the same notebook to GPU instances or connect to a Spark cluster.

Because the software used for data science and machine learning has several layers, and new software technologies are introduced and adopted rapidly, the data science environment is often difficult to deploy and manage. Domino solves this problem by storing the notebooks, along with their software dependencies, inside a Docker image. This allows the same code to be rerun consistently in the future. There is no need to manually reconstruct the software environment, which saves valuable time for data scientists.

Domino also helps data scientists share and collaborate. It brings the software development concepts of code sharing, peer review, and discussion seamlessly into the data science platform.

For companies that have not yet started their cloud migration, Domino on AWS makes data science an excellent first project. Domino runs entirely on AWS and integrates well into many AWS services. Customers who have stored large amounts of data in Amazon S3 can easily access it from within Domino. After training their models by using this data, they can easily deploy their machine learning model into AWS with a click of a button, and within minutes access it using an API. All of these features help data scientists focus on data science and not the underlying platform.

Today, Domino Data Science Platform is available as a SaaS offering at the Domino website. Additionally, if you prefer to run the Domino software in your own virtual private cloud (VPC), you can install the supporting software by using an AWS CloudFormation template that will be provided to you. If you prefer a dedicated VPC setting, Domino also offers a managed service offering, which runs Data Science Platform in a separate VPC. Before considering those options, get a quick feel for the platform by signing up for a free trial.

 

Cohesive Networks, by Roy Rodan

Many AWS customers have a hybrid network topology where part of their infrastructure is on premises and part is within the AWS Cloud. Most IT experts and developers aren’t concerned with where the infrastructure resides—all they want is easy access to all their resources, remote or local, from their local networks.

So how do you manage all these networks as a single distributed network in a secure fashion? The configuration and maintenance of such a complex environment can be challenging.

Cohesive Networks, an APN Advanced Technology Partner, has a product called VNS3:vpn, which helps alleviate some of these challenges. The VNS3 product family helps you build and manage a secure, highly available, and self-healing network between multiple regions, cloud providers, and/or physical data centers. VNS3:vpn is available as an Amazon Machine Image (AMI) on the AWS Marketplace, and can be deployed on an Amazon EC2 instance inside your VPCs.

One of the interesting features of VNS3 is its ability to create meshed connectivity between multiple locations and run an overlay network on top. This effectively creates a single distributed network across locations by peering several remote VNS3 controllers.

Here is an example of a network architecture that uses VNS3 for peering:

The VNS3 controllers act as six machines in one to address all your network needs:

  • Router
  • Switch
  • SSL/IPsec VPN concentrator
  • Firewall
  • Protocol redistributor
  • Extensible network functions virtualization (NFV)

The setup process is straightforward and well-documented with both how-to videos and detailed configuration guides.

Cohesive Networks also provides a web-based monitoring and management system called VNS3:ms, which runs on a separate server. From there you can update your network topology, fail over between VNS3 controllers, and monitor the performance of your network and instances.

See the VNS3 family offerings from Cohesive Networks in AWS Marketplace, and start building your secure, cross-connected network. Also, be sure to head over to the Cohesive Networks website to learn more about the VNS3 product family.

Why Next-Generation MSPs Need Next-Generation Monitoring

We wrote a couple of months ago about how ISVs are rapidly evolving their capabilities and products to meet the growing needs of next-generation Managed Service Providers (MSPs), and we heard from CloudHealth Technologies about how they are Enabling Next-Generation MSPs with cloud management tools that span the breadth of customer engagements, from Plan & Design to Build & Migrate to Run & Operate and Optimize. Today we are sharing a guest post from APN Advanced Technology and SaaS Partner Datadog, addressing the shift from traditional to next-generation monitoring and how these capabilities elevate the value that an MSP can deliver to its customers.

Let’s hear from Emily Chang, Technical Author at Datadog.


To stay competitive in today’s ever-changing IT landscape, managed service providers (MSPs) need to demonstrate that they can consistently deliver high-performance solutions for their customers. Rising to that challenge is nearly impossible without the help of a comprehensive monitoring platform that provides insights into customers’ complex environments.

Many next-generation MSPs team with Datadog to gain insights into their customers' cloud-based infrastructure and applications. In this article, we'll highlight a few of the ways that MSPs use Datadog's monitoring and alerting capabilities to proactively manage their customers' increasingly dynamic and elastic workloads with:

  • Full visibility into rapidly scaling infrastructure and applications.
  • Alerting that automatically detects abnormal changes.
  • Analysis of historical data to gain insights and develop new solutions.
  • Continuous compliance in an era of infrastructure-as-code.

Full visibility into rapidly scaling infrastructure and applications

As companies continuously test and deploy new features and applications, MSPs need to be prepared to monitor just about any type of environment and technology at a moment’s notice. Whether their customers are running containers, VMs, bare-metal servers, or all of the above, Datadog provides visibility into all of these components in one place.

Datadog’s integration for Amazon Web Services (AWS) automatically collects default and custom Amazon CloudWatch metrics from dozens of AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing, and Amazon Relational Database Service (Amazon RDS). In total, Datadog offers more than 200 turnkey integrations with popular infrastructure technologies. Many integrations include default dashboards that display key health and performance metrics, such as the AWS overview dashboard shown below.
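As a rough illustration of how custom metrics reach Datadog alongside the automatically collected CloudWatch data, the sketch below builds the JSON body accepted by Datadog's v1 series endpoint (POST /api/v1/series). The metric name, host, and tags are invented for the example, and a real submission would also require an API key.

```python
import json
import time

def custom_metric_payload(metric, value, host, tags):
    """Shape a single gauge data point for Datadog's series API."""
    return {
        "series": [{
            "metric": metric,
            "points": [[int(time.time()), value]],  # [timestamp, value] pairs
            "type": "gauge",
            "host": host,
            "tags": tags,
        }]
    }

payload = custom_metric_payload("myapp.checkout.latency", 0.42,
                                "web-01", ["env:prod", "team:payments"])
body = json.dumps(payload)  # would be POSTed to /api/v1/series with an API key
```

In practice most metrics arrive through the Datadog Agent or the AWS integration rather than hand-built HTTP calls, but the payload shape is the same idea: a named series of timestamped points with tags for slicing.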

 

 

MSPs need the ability to monitor every dimension of their customers’ modern applications—as well as their underlying infrastructure. As customers continuously deploy new features and applications in the cloud, MSPs can consult a global overview of the infrastructure with Datadog, and then drill down into application-level issues with Application Performance Monitoring (APM), without needing to switch contexts. Datadog APM traces individual requests across common libraries and frameworks, and enables users to identify and investigate bottlenecks and errors by digging into interactive flame graphs like this one:

 

Infrastructure-aware APM gives MSPs full-stack observability for their customers’ applications, which is critical for troubleshooting bottlenecks in complex environments.

Alerting that automatically detects abnormal changes

Because today’s dynamic cloud environments are constantly in a state of flux, MSPs can benefit immensely from sophisticated alerts that can distinguish abnormal deviations from normal, everyday fluctuations. As customers’ infrastructure rapidly scales to accommodate changing workloads, what constitutes a normal/healthy threshold often will need to scale accordingly. Customers may also wish to track critical business metrics, such as transactions processed, which often exhibit normal, user-driven fluctuations that correlate with the time of day or the day of the week.

Both of these scenarios explain why threshold-based alerts, while helpful for many types of metrics, are not ideal solutions for detecting more complex issues with modern-day applications. To accommodate these challenges, next-generation MSPs need a monitoring solution that uses machine learning to automatically detect issues in their customers’ metrics. Datadog’s anomaly detection algorithms are designed to distinguish between normal and abnormal trends in metrics while accounting for directional trends, such as a steady increase in transaction volume over time, as well as seasonal fluctuations.
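To make the anomaly-alerting idea concrete, here is a sketch of the kind of monitor definition Datadog's monitors API accepts, using its anomalies() query function. The metric name, algorithm choice, and thresholds are illustrative; the exact query syntax should be checked against Datadog's documentation before use.

```python
def anomaly_monitor(metric, window="last_4h", deviations=2):
    """Sketch a Datadog monitor that alerts on anomalous metric behavior.

    The anomalies() function wraps a metric query and flags points that
    fall outside the expected band, rather than a fixed threshold.
    """
    query = (f"avg({window}):anomalies(avg:{metric}{{*}}, "
             f"'agile', {deviations}) >= 1")
    return {
        "type": "query alert",
        "name": f"Anomalous behavior in {metric}",
        "query": query,
        "message": "Metric deviated from its usual pattern. @ops-team",
    }

monitor = anomaly_monitor("myapp.transactions.processed")
```

Compare this with a plain threshold monitor: a query like `avg(last_5m):avg:myapp.transactions.processed{*} < 100` would fire every night when traffic naturally dips, while the anomaly band adapts to the daily pattern.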

Datadog also uses machine learning for outlier detection—algorithms that determine when a host or group of hosts behaves differently from its peers. This effectively enables MSPs to make sense of how resources are being used within a customer’s infrastructure, even as it rapidly scales to accommodate varying workloads. Whenever an outlier monitor is triggered, MSPs can consult the monitor status page, like the one shown below, to quickly understand when the outlier was detected, and which component(s) of the infrastructure it may impact.

 

Analyzing historical data to gain insights and develop new solutions

As their customers’ environments scale and grow increasingly complex, MSPs need an effective way to visualize how all of those components change over time. For historical analysis, Datadog retains all data at full granularity for 15 months. This allows MSPs to analyze how their customers’ infrastructure and applications have evolved and make informed decisions going forward. In addition to visualizing AWS services and other common infrastructure technologies in default dashboards, MSPs can create custom visualizations that deliver deeper insights into their metrics. These visualizations include:

  • Trend lines: Use regression functions to visualize metric trends
  • Change graphs: Display how a metric’s value has changed compared to a point in the past (an hour ago, a week ago, etc.)
  • Heat maps: Use color intensity to identify patterns and deviations across many separate entities. In the example below, a Datadog heat map shows Docker CPU usage steadily trending upward across a large ensemble of containers

 

Ensuring continuous compliance in an era of infrastructure-as-code

Infrastructure-as-code has revolutionized the way that organizations deploy new assets and manage their existing resources, enabling them to become more agile, continuously deploy new features, and quickly scale resources to respond to changing workloads. However, as these tools are more widely adopted, they also require organizations to monitor their assets more carefully, in order to meet compliance requirements.

Datadog integrates with key infrastructure-as-code tools like Chef, Puppet, and Ansible to provide MSPs with a real-time record of configuration changes to each customer’s infrastructure. Datadog also ingests AWS CloudTrail logs to help MSPs track API calls made across AWS services and aggregates them in the event stream for easy reference. In the example below, you can see that CloudTrail reports any successful and failed logins to the AWS Management Console, as well as any EC2 instances that have been terminated—and who terminated them.
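As a small sketch of what can be extracted from those CloudTrail events, the function below pulls terminated instance IDs and the calling user out of raw CloudTrail records. The field names follow CloudTrail's JSON layout for EC2 TerminateInstances calls, and the sample record is invented for illustration.

```python
def terminated_instances(records):
    """Map each terminated EC2 instance ID to the user who terminated it."""
    out = {}
    for rec in records:
        if rec.get("eventName") != "TerminateInstances":
            continue  # only interested in terminations
        user = rec.get("userIdentity", {}).get("userName", "unknown")
        items = (rec.get("requestParameters", {})
                    .get("instancesSet", {})
                    .get("items", []))
        for item in items:
            out[item["instanceId"]] = user
    return out

# A hypothetical CloudTrail record for a termination by user "alice"
sample = [{
    "eventName": "TerminateInstances",
    "userIdentity": {"userName": "alice"},
    "requestParameters": {
        "instancesSet": {"items": [{"instanceId": "i-0abc123"}]}
    },
}]
# terminated_instances(sample) -> {"i-0abc123": "alice"}
```

Datadog performs this kind of ingestion and aggregation automatically in its event stream; the sketch only shows the raw material those events are built from.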

 

With all of this data readily available, MSPs can track critical changes as they occur in real time and set up monitors to proactively audit and enforce continuous compliance of their customers’ AWS environments. They can also search and filter for specific types of changes in the event stream and then overlay them on dashboards for correlation analysis, as shown below.

 

Event-based alerts help MSPs automatically detect unexpected changes and/or immediately notify their customers about events that may endanger compliance requirements. These alerts can also be configured to trigger actions in other services through custom webhooks. By making all of this information available in one central location, Datadog prepares MSPs with the data they need to respond quickly to compliance issues.

Next steps for next-generation MSPs

Datadog is pleased to provide monitoring capabilities that help MSPs navigate the challenges of delivering high-performance solutions for dynamic infrastructure and applications. To learn more about how Datadog helps fulfill AWS MSP Partner Program checklist items needed to apply for the AWS Managed Service Program, download our free eBook. You can also view a recording of our recent webinar with AWS and CloudHesive, “What it Means to be a Next-Generation Managed Service Provider,” here.

AWS HIPAA Program Update – Removal of Dedicated Instance Requirement

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect with AWS

I love working with Healthcare Competency Partners in the AWS Partner Network (APN) as they deliver solutions that meaningfully impact lives. Whether they are building SaaS solutions on AWS that tackle problems like electronic health records, or offering platforms designed to achieve HIPAA compliance for customers, our AWS Healthcare Competency Partners are constantly raising the bar on what it means to deliver customer-obsessed, cloud-based healthcare solutions.

Our Healthcare Competency Partners who offer solutions that store, process, and transmit Protected Health Information (PHI) sign a Business Associate Addendum (BAA) with AWS. As part of the AWS HIPAA compliance program, Healthcare Competency Partners must use a set of HIPAA-eligible AWS services for portions of their applications that store, process, and transmit PHI. You can find additional technical guidance on how to configure those AWS services in our HIPAA security and compliance white paper. For any portion of your application that does not involve any PHI, you are of course able to use any of our 90+ services to deliver the best possible customer experience.

We are rapidly adding new HIPAA-eligible services under our HIPAA compliance program, and I am very excited to see how quickly Healthcare Competency Partners are adopting these new services as part of their solutions involving PHI. Today, I want to communicate a recent change to our HIPAA compliance program that should be positively received by many of our APN Partners in Healthcare and Life Sciences: APN Partners who have signed a BAA with AWS are no longer required to use Amazon EC2 Dedicated Instances and Dedicated Hosts to process PHI.

Over the years, we have seen tremendous growth in the use of the AWS Cloud for healthcare applications. APN Partners like Philips now store and analyze petabytes of PHI in Amazon S3, and others like ClearDATA provide platforms which align to HIPAA or HITRUST requirements for their customers to build on. Customer feedback drives 90+% of our roadmap, and when we heard many customers and APN Partners requesting this change, we listened.

Optimizing your architecture

One of our Leadership Principles at Amazon is “Invent and Simplify”. In the spirit of that leadership principle, I want to quickly describe several optimizations I anticipate APN Partners might make to simplify their architecture with the aforementioned change to the AWS HIPAA compliance program.

As always, if you have specific questions, please reach out to your Partner Manager or AWS Account Manager and they can pull in the appropriate resources to help you dive deeper into your optimizations.

Optimizing compute for cost and performance

With default tenancy on EC2, you can now use all currently available EC2 instance types to architect applications that store, process, and transmit PHI. This means that you can leverage Spot Instances for all instance types, for example in batch workloads, and use the burstable T2 family of EC2 instances in your applications rather than the M3 or M4 instance families. You should continue to take advantage of the features of VPC as you migrate from Dedicated Instances or Dedicated Hosts to default tenancy.
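As an illustration, the dictionary below shows the kind of parameters you might now pass to boto3's `ec2.run_instances` for a burstable instance with default tenancy. The AMI and subnet IDs are placeholders, and the actual API call is left out so the sketch stays self-contained.

```python
def default_tenancy_launch_params(ami_id, subnet_id):
    """Launch parameters for a burstable T2 instance with default tenancy."""
    return {
        "ImageId": ami_id,            # placeholder AMI ID
        "InstanceType": "t2.medium",  # burstable family, now usable with PHI
        "MinCount": 1,
        "MaxCount": 1,
        "SubnetId": subnet_id,        # placeholder subnet in your VPC
        "Placement": {"Tenancy": "default"},  # "dedicated" no longer required
    }

params = default_tenancy_launch_params("ami-0123456789abcdef0",
                                       "subnet-0abc1234")
# A real launch would be: boto3.client("ec2").run_instances(**params)
```

Under the old requirement, the Placement block would have specified `"Tenancy": "dedicated"` (or a Dedicated Host), constraining both instance choice and pricing options.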

Right-sizing for Microservices

Many of our Healthcare Competency Partners, especially those who build SaaS applications, use microservices architectures. They often use Amazon ECS for Docker container orchestration, which runs on top of Amazon EC2. The ability to use default tenancy EC2 instances for PHI will enable you to further right-size your applications by not having to factor in Dedicated Instances or Dedicated Hosts.

Simplifying your big data applications

Amazon EMR is a HIPAA-eligible service that many Healthcare Competency Partners use to analyze large datasets containing PHI. When using dedicated tenancy, these Partners needed to launch EMR clusters in VPCs with dedicated tenancy. This is how an architecture might look using dedicated tenancy, where the left side is a VPC with dedicated tenancy interacting with an Amazon S3 bucket containing PHI.

With this update, you can consolidate these two VPCs into a single default tenancy VPC, which simplifies your architecture by removing components such as VPC peering and eliminating the need to ensure that CIDR blocks don’t overlap between VPCs.

Partner segregation by account rather than VPC

Many of our Healthcare Competency Partners, especially managed service providers (MSPs), prefer to segregate their customers or applications into different accounts for the purposes of cost allocation and compute/storage segregation. With the removal of the requirement to use Dedicated Instances or Dedicated Hosts, you can more easily segregate customers and applications into the appropriate accounts.

Conclusion

For more information on HIPAA on AWS, please see this blog post by our APN Compliance Program Leader, Chris Whalley, as well as check out our HIPAA in the Cloud page.

If you have any questions, please feel free to reach out to your AWS Account Manager or Partner Manager, and they can help direct you to the appropriate AWS resources. You can also email apn-blog@amazon.com and we will route your questions to the appropriate individuals.