AWS Partner Network (APN) Blog

Qubole featured in AWS “This is My Architecture”

by Kate Miller | in AWS Partner Solutions Architect (SA) Guest Post, Big Data, Big Data Competency

By Paul Sears. Paul is a Partner Solutions Architect (SA) at AWS. 

Qubole opened up the hood of its Qubole Data Service (QDS) architecture as Suresh Ramaswamy, Senior Director of Engineering at Qubole, discusses how the company built its big data platform on AWS. Suresh outlines how QDS manages the data processing infrastructure, automatically scaling the processing cluster as needed, and how QDS can leverage Amazon EC2 Spot Instances to help manage costs. You can learn more about Qubole’s architecture on AWS here:

Wrap Up: Cross-Account Role Onboarding Workflow

by Ian Scofield | in AWS CloudFormation, AWS Partner Solutions Architect (SA) Guest Post, How-to Guide, Security

By Ian Scofield. Ian is a Partner Solutions Architect (SA) at AWS. 

Over the course of three blog posts (Parts 1, 2, and 3), we’ve been discussing a new way for APN Partners to onboard customers, focusing on the creation of cross-account roles.  Throughout this series, we’ve proposed the usage of AWS CloudFormation for the creation of this role, to provide an automated process that can seamlessly integrate into an existing onboarding workflow.

In this post, we’ll recap the components discussed so far, put it all together, and show you the completed workflow.  As we move along, each step we discuss (designated as Step 1, 2, 3, etc. in the text) will correlate with a numbered arrow in the following diagram.

As you’ll recall from the earlier posts, to kick off the onboarding process, we first ask the customer to provide their AWS account ID. (Step 1)  We use this information as follows:

  • We whitelist the provided account ID in our non-public Amazon SNS topic to adhere to security best practices. (Step 2)
  • We create a customer tracking entry in our Amazon DynamoDB table. (Step 3)
  • We associate a uniquely generated external ID with the provided account ID and store it in our DynamoDB table. (Step 3)
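
As a sketch of Steps 2 and 3 on the partner side, the tracking entry might be built like this in Python; the table name, attribute names, and the use of a UUID for the external ID are illustrative assumptions, not details prescribed by the series:

```python
import uuid

def build_customer_record(account_id: str) -> dict:
    """Tracking entry for Step 3: associate a uniquely generated
    external ID with the customer-provided AWS account ID."""
    return {
        "AccountId": account_id,          # provided by the customer (Step 1)
        "ExternalId": str(uuid.uuid4()),  # unique per customer (Step 3)
        "Status": "PENDING",              # the role ARN arrives later (Step 7)
    }

# Persisting the record (Step 3) and whitelisting the account ID in the
# non-public SNS topic policy (Step 2) would then be boto3 calls, e.g.:
#   boto3.resource("dynamodb").Table("CustomerOnboarding").put_item(
#       Item=build_customer_record(account_id))
```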

After storing the relevant information and updating the relevant policies, we generate a customized CloudFormation template. (Step 4)  In Part 2 of this series, we demonstrated a method for generating customized templates using an AWS Lambda function to replace values.  This customized template will contain the AWS account ID the customer will trust, and the uniquely generated external ID the partner uses to assume the role.
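
The substitution itself can be as simple as a string replacement over a template skeleton. The placeholder markers and skeleton below are hypothetical; Part 2 of the series shows the full Lambda-based approach:

```python
TEMPLATE_SKELETON = """\
Resources:
  CrossAccountRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Action: 'sts:AssumeRole'
          Principal:
            AWS: 'arn:aws:iam::<TRUSTED_ACCOUNT_ID>:root'
          Condition:
            StringEquals:
              'sts:ExternalId': '<EXTERNAL_ID>'
"""

def customize_template(trusted_account_id: str, external_id: str) -> str:
    """Hardcode both IDs into the Resources section so the customer
    is not asked for any parameters at launch time."""
    return (TEMPLATE_SKELETON
            .replace("<TRUSTED_ACCOUNT_ID>", trusted_account_id)
            .replace("<EXTERNAL_ID>", external_id))

# A Lambda function would run this substitution, then upload the result
# to S3, e.g. s3.put_object(Bucket=..., Key=..., Body=customized).
```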

There are two options available: include the two IDs as the default values for the CloudFormation parameters, or hardcode the IDs into the resources section of the template.  If we prepopulate the parameters with the IDs, the user can easily inspect the IDs, but might override them with incorrect data.  If we hardcode the IDs into the resources section, the user will not be presented with any parameters to fill in. This removes the risk of entering erroneous values at the cost of visibility.  We prefer the second option, as it increases the robustness of the process.

Once this customized template has been generated and uploaded to an Amazon S3 bucket, we provide the user with a launch stack URL.  When the user clicks this link (Step 5), they will be directed to the AWS CloudFormation console with the stack name and template location prepopulated, so they can launch the customized template easily.  Embedded in this template is a custom resource, which automatically sends the Amazon Resource Name (ARN) for the created cross-account role back to the APN Partner. (Step 6)  Once the APN Partner receives the role ARN, they can be sure that the CloudFormation template has been launched.  Additionally, this process eliminates all manual steps, including the customer copying and pasting the created ARN back into the APN Partner’s onboarding workflow.
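
The launch stack URL can be assembled from the region, stack name, and the S3 location of the customized template. This is a sketch using the common CloudFormation console link format; the names in the example are placeholders:

```python
from urllib.parse import quote

def launch_stack_url(region: str, template_url: str, stack_name: str) -> str:
    """Link that opens the CloudFormation console with the stack name
    and template location prepopulated."""
    return (
        f"https://console.aws.amazon.com/cloudformation/home?region={region}"
        f"#/stacks/new?stackName={quote(stack_name)}"
        f"&templateURL={quote(template_url, safe='')}"
    )
```

For example, `launch_stack_url("us-east-1", "https://s3.amazonaws.com/onboarding-bucket/customer.template", "PartnerOnboarding")` yields a link the customer can simply click.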

Custom resources provide an additional benefit. They are triggered whenever the CloudFormation stack is updated or deleted.  If the customer modifies the template or deletes the stack, a notification will be sent to the APN Partner, which allows them to react to the change appropriately.  For example, if a user deletes the stack, the partner can reach out and ask if there were any technical problems that led to them discontinuing the service, provide an exit survey, or trigger an off-boarding workflow.

On the APN Partner side, this custom resource takes the cross-account role ARN and puts it in the DynamoDB table that contains the customer data. (Step 7) We now have a customer record that contains their AWS account ID, a unique external ID, and a cross-account role ARN, which can be used together to assume the role in the customer’s account.  As a final cleanup step, the custom resource deletes the customized template from the S3 bucket, since it’s no longer needed.
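
That partner-side handler might look like the following sketch. The table key schema, template object naming, and status value are my own assumptions; the AWS clients are passed in so the logic stays easy to test:

```python
def record_role_arn(table, s3, message: dict, template_bucket: str) -> None:
    """Handle the custom resource notification (Step 7): attach the
    cross-account role ARN to the customer's record, then delete the
    customized template, which is no longer needed."""
    props = message["ResourceProperties"]
    table.update_item(
        Key={"AccountId": props["AccountID"]},
        UpdateExpression="SET RoleArn = :arn, OnboardStatus = :s",
        ExpressionAttributeValues={":arn": props["RoleArn"],
                                   ":s": "ONBOARDED"},
    )
    # Cleanup: the per-customer template has served its purpose.
    s3.delete_object(Bucket=template_bucket,
                     Key=f"{props['AccountID']}.template")
```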

In summary, we’ve provided a workflow where the customer simply provides their AWS account ID, clicks a launch stack button, and then follows the CloudFormation wizard.  At no point does the user have to interact with the IAM console, copy and paste policy documents and role ARNs, or engage with any other potentially error-prone or manual steps.

With this workflow, we’ve reduced the chances of failure due to misconfiguration or incorrectly typed values, and we’ve minimized onboarding friction to reduce the chance of customers getting stuck or frustrated and abandoning the onboarding process.  As an added benefit, we’ve also provided a way to capture the removal of the trust policy and cross-account role.  This allows APN Partners to proactively reach out to determine if the removal was due to accidental deletion, and engage with customers if there were any technical issues that led to them tearing down the role.

Be sure to read the preceding posts for more detail on each of these steps.  Let us know if you have any questions or comments regarding this process!

Streamlined Cloud Data Management with Commvault

by Kate Miller | in APN Competency Partner, Partner Guest Post, Storage

By Michael Barrow, Technical Alliance Architect, Cloud Providers, Commvault. Commvault is an Advanced APN Technology Partner and AWS Storage Competency Partner.

Cloud data management is an important part of your cloud journey, but it doesn’t have to be overly complicated. When your cloud strategy includes automated, repeatable processes for data migration, data management, and using data in the cloud, you can more quickly realize the benefits of your cloud projects.

Commvault: evolved data protection for a cloud-first world

Commvault, a comprehensive data management platform for both on-premises and cloud data, provides holistic data protection and data management (e.g., archiving, compliance, cataloging, e-Discovery, or identification and classification). Commvault software interoperates with every major operating system, application, and storage platform.

With Commvault, organizations can deftly manage business-critical information living on virtual machines (VMs) or physical servers, whether they are deployed in traditional data centers, the cloud, or both.

Commvault software drives data protection, backup, and management for both on-premises and cloud resources.

Commvault software integrates with Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), Amazon Simple Storage Service (Amazon S3 & S3-IA), Amazon Glacier, and AWS Snowball (Snowball). We find that the Commvault solution can help AWS customers meet their requirements in the AWS Shared Responsibility Model. Plus, with the ability to convert Microsoft Hyper-V or VMware virtual machines into Amazon EC2 instances, Commvault can help optimize the migration of on-premises workloads onto AWS.

In addition to standard applications (Oracle, Exchange, SQL Server, SharePoint, SAP, DB2, Informix, MySQL, and PostgreSQL), Commvault provides protection for leading big data applications, like Greenplum, Hadoop, and MongoDB. Commvault software can also preserve assets stored in Software as a Service (SaaS) platforms, like Microsoft Office 365 (O365), Salesforce, Gmail, or Google Drive.

Finally, Commvault offers both source and target compression and dedupe to minimize load on Amazon EC2 instances during backup and/or archive operations, and greatly reduce network bandwidth and storage consumption.

Joint Use Cases: AWS and Commvault Data Management

With tight integration between AWS and Commvault, joint customers can see significant value in a variety of use cases.

Data backup to the cloud replaces tape storage

We find that replacing complicated and expensive tape operations with cloud-based data backup is one of the quickest and easiest ways to start a transition to the cloud.

Commvault provides native support of both the Amazon S3 and Amazon Glacier APIs, so you don’t need to install and manage complex cloud gateways.

In just minutes, an on-premises Commvault installation can be configured for data backup to Amazon S3 or Amazon Glacier. The whole process consists of a few simple steps: set up an account on AWS, create a bucket, create a cloud library pointing at the bucket, and finally write data to the cloud library.
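
For reference, the AWS-side portion of those steps can be scripted. This is a sketch with placeholder names; the cloud library itself is configured in the Commvault console, not through the AWS API:

```python
def bucket_endpoint(name: str, region: str) -> str:
    """S3 URL that a Commvault cloud library would point at."""
    return f"https://{name}.s3.{region}.amazonaws.com"

def create_backup_bucket(name: str, region: str = "us-west-2") -> str:
    """Create the target bucket for backup data (requires boto3 and
    AWS credentials, so it is not exercised here)."""
    import boto3  # imported lazily so the pure helper above has no dependency
    boto3.client("s3", region_name=region).create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
    return bucket_endpoint(name, region)
```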

Add cloud storage dialog

See a short video demo of Commvault data backup to Amazon S3: Two clicks to the Cloud with AWS.

Data protection for Amazon EC2-based workloads

Commvault provides two choices for protecting Amazon EC2 workloads. First, the Commvault Virtual Server Agent (VSA) manages AWS-native data protection, for protecting file-based workloads or creating disaster recovery copies of instances. Commvault can create and manage snapshots of Amazon EC2 (and Amazon RDS) instances and manage copying snapshots to other regions. Commvault provides protection without the pain of maintaining complex scripts — saving effort, improving recovery, and reducing costs.

On the other hand, when application consistency or granular recovery is needed, installing the Commvault agent within the guest OS is the way to go. It provides backups and archiving, and supports more flexible recovery options.

The Commvault rule groups feature even supports custom tags to select Amazon EC2 instances.

Fast, secure data migration to Amazon EC2

Commvault customers also redeploy on-premises VMs to AWS for test, disaster recovery, or permanent migration. With Commvault, you can back up VMs at the hypervisor level (Microsoft Hyper-V or VMware), restore the image onto AWS, and convert it into an Amazon EC2 instance. As part of the restore and conversion process, you can set the instance name, Availability Zone, instance type, and network and security group settings.

Even moving hundreds of terabytes to AWS is a cinch with Commvault integration with Snowball. Rather than pointing a Commvault cloud library at Amazon S3, point to a host running the Snowball adapter and Commvault MediaAgent software. Once the data is transferred to AWS, you can reconfigure the cloud library to point at the Amazon S3 endpoint to continue normal backup and/or archive operations.

Architecture for VM migration from on-premises to Amazon Web Services

Data protection, compression, and deduplication for Amazon S3 buckets

AWS customers have several choices to preserve critical data written to Amazon S3: bucket versioning, cross-region replication, and MFA delete.

Commvault provides another protection option via the same policy-driven framework used for filesystem and application data. An appropriate storage policy can drive object copies to Commvault and into an Amazon S3 bucket in the same region as the origin or another region, an Amazon Glacier archive, or even an on-premises storage array. Just like any other data written to Commvault, compression and dedupe can significantly reduce the cost of data copies.

Archive SaaS and Office 365 to AWS

SaaS application and data backup is critical, especially as more line of business owners purchase new SaaS offerings and expect IT to support them in the cloud.

O365 is a popular example. Although O365 has offerings for content archival, an organization may want to manage this process on a disparate infrastructure as part of a larger compliance strategy or to add provider diversity to their stack.

Commvault data backup and archive of Exchange data from O365 can be deployed on Amazon EC2 instances with content stored in a compressed and deduped format within Amazon S3 or Amazon Glacier. With the add-in for Outlook installed on desktops, end users can seamlessly access messages and attachments that have been archived.

Additionally, corporate compliance personnel can use Commvault e-Discovery capabilities to search and execute legal holds against data in the AWS-resident archive without needing to interact with the O365 deployment.

The result is a system that provides infrastructure diversity, cost control, and centralized governance of critical business data.

Secure, Flexible Data Management

Commvault provides a single solution for managing data across files, applications, databases, hypervisors, and the cloud. Its data management capabilities, spanning backup, recovery, management, and e-Discovery, are tightly integrated with the functionality offered by AWS.

Best of all, Commvault gives you a “single pane of glass” to manage data protection and data management needs across the entire infrastructure and application stack, whether it is on-premises, on AWS, or a combination of the two.

Learn more about Commvault data management for AWS.


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Collecting Information from AWS CloudFormation Resources Created in External Accounts with Custom Resources

by Erin McGill | in AWS CloudFormation, AWS Partner Solutions Architect (SA) Guest Post, How-to Guide, Security

By Erin McGill. Erin is a Partner Solutions Architect (SA) at AWS. 

Throughout this series, we’ve talked about easing cross-account role creation with AWS CloudFormation and a custom stack URL.  We’ve also discussed how to dynamically generate CloudFormation templates to populate a unique external ID. But once your cross-account role is created, how does the Amazon Resource Name (ARN) for the generated cross-account role in the customer account get back to your company’s onboarding portal? Asking the customer to cut and paste the output ARN is possible, but it’s a manual step that’s prone to human error.

In this post, we’ll describe how to use a custom resource to send the cross-account role ARN to the onboarding portal. A custom resource enables you to add custom provisioning logic to a CloudFormation template. AWS CloudFormation runs this code anytime a stack is created, updated, or deleted. By adding a custom resource to the template that is launched in your customer’s account, you can post a custom resource request that includes the cross-account role ARN back to an Amazon SNS topic.


Every time the CloudFormation stack is updated or deleted, it will trigger the custom resource.  

Even if no action takes place within the custom resource during these stages, you need a mechanism to signal success or failure back to the ResponseURL provided in the custom resource request. If the ResponseURL does not receive a success or failure signal, the CloudFormation stack will remain in an UPDATE_IN_PROGRESS or DELETE_IN_PROGRESS state, and will ultimately fail when it reaches the timeout waiting for a response.
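
In a Lambda function backing the custom resource, that signal is a JSON document PUT to the pre-signed ResponseURL. The sketch below follows the documented custom resource response fields; the helper names are my own:

```python
import json
from urllib.request import Request, urlopen

def build_response(event: dict, status: str, reason: str = "",
                   data: dict = None) -> dict:
    """Body CloudFormation expects at the ResponseURL for every
    Create, Update, and Delete event, even no-op ones."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason,
        "PhysicalResourceId": event.get("PhysicalResourceId",
                                        event["LogicalResourceId"]),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    }

def send_response(event: dict, status: str) -> None:
    """PUT the response to the pre-signed ResponseURL so the stack
    doesn't hang in an *_IN_PROGRESS state until timeout."""
    body = json.dumps(build_response(event, status)).encode()
    urlopen(Request(event["ResponseURL"], data=body, method="PUT",
                    headers={"Content-Type": ""}))
```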


The SNS topic runs in your company’s AWS account, which is associated with your onboarding portal, as shown in the following illustration.

The onboarding portal then takes the message that arrives in the SNS topic and uses it to associate the new cross-account role ARN with the customer’s account ID and unique external ID. You can use an Amazon DynamoDB table as a tracking mechanism. This table stores the cross-account role ARN in an item that also contains the customer’s account ID and unique external ID from earlier in the onboarding process.


The SNS topic in your company’s AWS account that receives the cross-account role ARN from a customer’s AWS account should not be publicly accessible—it should be restricted to only those authorized to send messages to the topic.  In a previous post in this series, we explained how the user-provided AWS account ID is associated with our generated unique external ID for the customer in a DynamoDB table. With the AWS account ID, you can also whitelist this account in the SNS topic policy to keep your AWS resource secure.
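
Maintaining that whitelist amounts to rewriting one statement of the topic policy. Here's a sketch, with a hypothetical policy shape; the update would be applied with the SNS SetTopicAttributes API:

```python
import json

def with_whitelist(policy_json: str, account_ids) -> str:
    """Rewrite the topic policy so that only onboarded accounts may
    publish. Passing a smaller list (e.g., after a stack deletion)
    removes access for the dropped account."""
    policy = json.loads(policy_json)
    stmt = policy["Statement"][0]
    stmt["Principal"] = {
        "AWS": [f"arn:aws:iam::{a}:root" for a in sorted(account_ids)]
    }
    return json.dumps(policy)

# Applied with:
#   sns.set_topic_attributes(TopicArn=topic_arn, AttributeName="Policy",
#                            AttributeValue=with_whitelist(current, ids))
```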


When your customer launches the CloudFormation stack, the template creates the custom resource. As demonstrated in our previous blog post, the Service Token for the custom resource is prepopulated with the SNS topic associated with the onboarding portal in your company’s AWS account. Once the cross-account role is created, the custom resource is triggered, and it sends the cross-account role ARN, your customer’s AWS account ID, and any other additional information you want to include to the SNS topic in your company’s AWS account.

Since the stack remains in the customer’s account, it will continue to provide a mechanism for capturing changes to the CloudFormation stack, including deletion, back to your company.  For example, if a customer deletes the CloudFormation stack, thereby deleting the cross-account role, a notification will be sent to the same SNS topic.  This will allow you or your support staff to proactively reach out for troubleshooting or assistance.  We recommend removing the customer’s AWS account ID from the SNS topic policy during the stack deletion phase, as they will need to restart the workflow to re-onboard, which will, in turn, re-whitelist their account ID.

Here’s the Resources section from an example CloudFormation template that demonstrates how to create the cross-account role and send it back to your company’s SNS topic:

Resources:
# This is the cross-account role that will be created
  CrossAccountRole:
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action: 'sts:AssumeRole'
          Effect: Allow
          Principal:
            AWS: arn:aws:iam::111111111112:root #The Master Account root ARN
          Condition:
            StringEquals:
               sts:ExternalId: 12345 # generated Customer External ID unique to this customer
          Sid: ''
        Version: '2012-10-17'
      Path: "/"
      Policies:
      # This role should have only the permissions necessary
      - PolicyDocument:
          Statement:
          - Action:
            - "ec2:Describe*"
            Effect: Allow
            Resource: "*"
          Version: '2012-10-17'
        PolicyName: CustomerAccountAccess
    Type: 'AWS::IAM::Role'
  # This custom resource phones home the cross-account ARN information to your company’s SNS topic
  PhoneHomeCustomResource:
    Properties:
      ServiceToken: arn:aws:sns:us-west-2:111111111112:ARNSnsTopic # Your Company’s SNS Topic Arn 
      RoleArn: !GetAtt CrossAccountRole.Arn # The ARN of the IAM cross-account role generated above
      AccountID: !Ref AWS::AccountId # The customer’s AWS account ID
    Type: Custom::PhoneHomeCustomResource
    Version: '1.0'

This CloudFormation YAML template has one section, Resources, with two resources defined within it:

  • CrossAccountRole
  • PhoneHomeCustomResource

The CrossAccountRole resource creates the IAM role that your company can assume in the customer’s account. It consists of three properties:

  • AssumeRolePolicyDocument – Identifies your company by your account ARN, provided in the Principal, and sets the condition that your company account is allowed to assume the role only as long as the external ID is provided when assuming the role.
  • Path – A friendly name or path for the role. We’ve chosen to set this to a forward slash for simplicity.
  • PolicyDocument – Defines the permissions that this role will be granted. In this example, we are granting the ability to describe EC2 instances in the account and naming the policy CustomerAccountAccess.

The PhoneHomeCustomResource creates a custom resource with three properties:

  • ServiceToken – Identifies the endpoint that gives the template the ability to access the custom resource in your company’s account. In this case, the service token is the SNS topic ARN in your company’s account. This service token must be from the same region as the stack being created. This SNS topic will, in turn, trigger a Lambda function. This function can perform any onboarding required by your workflow, but, most importantly, sends back the SUCCESS status to CloudFormation so it can proceed with launching the stack.
  • RoleArn – The ARN of the CrossAccountRole
  • AccountID – The account ID of the customer’s account that is launching your CloudFormation stack.

CloudFormation is able to identify this dependency and will wait until the CrossAccountRole resource has been created.  Once it has, the custom resource will be invoked, and the RoleArn and AccountID values will be sent to the ServiceToken endpoint, which is the SNS topic in your company’s account.

In this blog series, we showed you how to create a URL to launch a stack in your customer’s account, dynamically generate a CloudFormation template to include unique data for each customer, and automatically acquire and send information back about created resources, such as IAM cross-account role ARNs. In the next, and final, post in this series, we’ll provide a walkthrough of the entire process.

Do you have any questions or comments? Let us know in the comments section, and check back soon for the final post in this series.

 

Using TSO Logic Data Analytics and Data Modeling to Demonstrate Cost Advantages of AWS Cloud Migration

by Aaron Rallo | in APN Technology Partners, Migration, Partner Guest Post

By Aaron Rallo, CEO, TSO Logic.

This is a guest post from TSO Logic, an Advanced APN Technology Partner and AWS Migration Competency Partner.

Let’s say you’re considering migrating to the AWS Cloud, but first want to get a solid grasp on the economics. Maybe your team hasn’t had the time to identify how much compute you actually use, so they can’t evaluate potential migration costs. Or maybe your team is running numbers based on limited, static data and has concluded that migration would be more expensive. If you are planning a migration to AWS, take a look at the solutions offered by TSO Logic, an Advanced APN Technology Partner and AWS Migration Competency Partner whose platform can help you analyze the performance and financial characteristics of every workload you are running.

Over the past few months, six customers considering large-scale data center migrations to AWS used TSO Logic to discover the scope of their environments and analyze utilization patterns. The outcome for each was a data-driven business plan and cloud cost model to fast-track their cloud transformation. The following sections show how TSO Logic was used to analyze real-time data from these customers, with a focus on compute usage, utilization, and instance sizing.

Let’s take a look at some insights from the analysis for the initial cost-modeling done with TSO Logic for these six customers.

High-level Insights and Takeaways

The first step to cloud migration is to have a good plan in place, but it’s important to note that cost-modeling should not be solely a one-time exercise. Customers who migrate to AWS are able to continue to optimize and save costs long-term because of their ability to continuously innovate and take advantage of new services and technology (such as building a serverless architecture, improving automation, and so forth) that enable them to continue reducing their total cost of ownership.

Insight from the TSO Logic Analysis: Common Oversights When Initially Calculating the Cost of the AWS Cloud:

  1. As previously noted (but well worth repeating), cost modeling is not a one-and-done exercise because the variables, such as compute patterns, applications, and cloud service catalogs, are always changing. Modeling on a consistent basis will ensure that you are always getting the most compute for your investment.
  2. Overprovisioning happens. For many enterprises, compute was intentionally overprovisioned to meet the demands of unexpected spikes in workload. With the elasticity of the cloud, there is no longer a need to overprovision to that extent. It is important to take this into consideration when creating on-premises to cloud cost models.
  3. AWS is continuously innovating so you don’t have to. Hardware is frequently updated, and new instance families and types are often added. When you compare your current on-premises costs to cloud costs, it’s important to consider the differences in compute power.
  4. Rightsizing is key to an accurate model. When the rightsizing of environments was accurately accounted for (e.g., for utilization levels, processing power, etc.) with the six customers who used the TSO Logic Platform, AWS migration led to annual compute cost savings of at least 26%, and up to 60%, compared with on-premises. This includes hardware amortization, maintenance, OS licensing, and facility costs (it doesn’t include savings related to labor).

A Real-World Field Study:
Leveraging the TSO Logic Platform to Analyze and Rightsize On-Premises OS Instances

The Raw Numbers

AWS/TSO Logic Customers: 6
On-premises OS Instances Evaluated: 33,936
Current On-premises Costs: $58,224,000/yr.
TSO Logic Rightsized to AWS: $42,816,000/yr. (26% savings over on-premises)


The Findings Using the TSO Logic Platform

  • Rightsizing for economic sense. When cloud instances can be rightsized based on historical usage and utilization patterns, TSO Logic demonstrated that AWS is more economical and the customers whose data was analyzed could save 26% or more over their current costs.
  • Older hardware, server refresh. When TSO Logic looked at the 15,270 instances that were running on hardware that was more than five years old, it was determined that migrating these instances to the AWS Cloud could reduce costs by 74%. Overall, one of the most immediate economic benefits of migration comes from instances that are running on older hardware — the older the hardware on premises, the more economical the cloud becomes according to the analysis done. Cloud also has the added benefit of reducing upfront capital investments in new hardware. In addition, since AWS is always innovating and releasing new instance types, you get the benefit of the latest hardware without any additional capital expenditure.
  • Environments. When TSO Logic looked at the 2,883 instances in the sample set running in test-dev environments on premises, it was determined that migrating would save $2.8 million/yr. for these customers — a minimum savings of 42%.

The Bottom Line

To accurately evaluate the cost of cloud migration, you first have to know the actual historical usage and utilization patterns of your workloads and your true provisioning needs. Take a look at how TSO Logic’s solution can help you as you look to understand your business case for migration and how you can take advantage of the cost benefits of running on AWS. www.tsologic.com/aws

Want to learn more? Learn how TSO Logic can help you plan your migration to AWS during our upcoming webinar. Register here.

About the Data

The economic models for rightsizing and right costing the environments detailed in this post were created using data points ingested into the TSO Logic agentless software platform. Data included provisioned compute and historical usage and utilization data, along with a pre-existing benchmark catalog of on-premises costs. The models identified the direct-match and the rightsized options along with the associated costs from the AWS catalog.


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

AWS IoT on Mongoose OS, Part 2

by Tim Mattison | in AWS IoT, AWS Partner Solutions Architect (SA) Guest Post, How-to Guide, IoT

Enhancing IoT security with Mongoose OS, Espressif, Microchip, and AWS IoT

By Tim Mattison. Tim is a Partner SA focused on IoT.

Background

Our previous post, AWS IoT on Mongoose OS, Part 1, described how Mongoose OS can connect an Espressif ESP8266 or ESP32 to AWS IoT. It explained how AWS IoT sets the bar very high for security with Transport Layer Security (TLS) mutual authentication that assures both the client and server that they’re communicating with the correct system.

How can we continue to optimize? In this post, I’ll explain how you can use hardware-based cryptographic functions to improve both security and performance in an IoT deployment.

Dedicated cryptography

Many microcontrollers have neither dedicated cryptographic instructions nor protected flash or secure elements. Not having the proper cryptographic instructions means that certain cryptographic operations have a significant effect on power consumption and tend to be time-consuming. A lack of protected flash or secure elements means that a microcontroller’s TLS certificates can be extracted from the hardware, copied, and used to impersonate the device, using readily available debugging tools.

The ATECC508A CryptoAuthentication™ device from Microchip Technology combines hardware-based cryptographic functions and secure storage in a design that resists attack through physical, electrical, and software means. The device connects through an I2C interface to a microcontroller. The microcontroller then uses a simple command set to perform cryptographic operations on data with a private key that stays on the ATECC508A. The ATECC508A can internally generate private keys, or can store private keys generated by an external system.  During product development, this external system might be a developer’s computer.  At full production volumes, this external system is typically a high-speed hardware security module (HSM) installed in a secure manufacturing facility.

By eliminating the need for the host processor to handle cryptographic operations, the ATECC508A can help enhance security and performance. Microcontroller-based designs using the ATECC508A can establish TLS connections faster than software-only TLS implementations.

AWS has worked closely with Microchip and Cesanta to provide a way to use Microchip’s ATECC508A device with the ESP8266 and ESP32 on Cesanta’s Mongoose OS platform. In this post, we’ll walk through this process step by step. At the end of this post, you’ll have an inexpensive platform suitable for development, prototyping, and production.

Wiring

You’ll need an ESP8266 NodeMCU device and an ATECC508A chip. The ATECC508A can be obtained either as an ATCRYPTOAUTH-XPRO board, which requires no soldering, or a bare-bones ATECC508A, which requires soldering.

Function | ATECC508A pin | ESP8266 pin  | NodeMCU pin | ATCRYPTOAUTH pin
SDA      | 5             | 10 (GPIO12)  | D6          | 11 (yellow)
SCL      | 6             | 9 (GPIO14)   | D5          | 12 (white)
GND      | 4             | Any suitable | GND         | 19 (black)
VCC      | 8             | Any suitable | 3V3         | 20 (red)

Wiring for ATCRYPTOAUTH-XPRO:

Wiring for the bare-bones ATECC508A:

Setup

When the ATECC508A chip is wired, it’s time to configure it.

1. Generate a certificate and key. You can create a self-signed certificate or use your own certificate authority (CA). You’ll need to generate an Elliptic Curve Digital Signature Algorithm (ECDSA) certificate using the P-256 curve, because the ATECC508A supports that certificate type.

$ openssl ecparam -out ecc.key.pem -name prime256v1 -genkey
$ openssl req -new -subj \
 "/C=IE/L=Dublin/O=ACME Ltd/OU=Testing/CN=test.acme.com" \
 -sha256 -key ecc.key.pem -text -out ecc.csr.tmpl
$ openssl x509 -in ecc.csr.tmpl -text -out ecc.crt.pem \
 -req -signkey ecc.key.pem -days 3650
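Before moving on, you can sanity-check that a key really is on the P-256 curve (OpenSSL calls it prime256v1). The snippet below generates a throwaway key in /tmp and inspects it; you can run the same `openssl ec` command against your ecc.key.pem:

```shell
# Generate a throwaway P-256 key, then dump its parameters.
openssl ecparam -name prime256v1 -genkey -noout -out /tmp/ecc-check.key.pem
# The output should include "ASN1 OID: prime256v1".
openssl ec -in /tmp/ecc-check.key.pem -noout -text
```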

2. Flash your device with Mongoose OS, as we described in step 1 of the previous post.

3. Use the Mongoose OS I2C.Scan function to verify that the chip is wired properly and functioning as expected.  You should expect the mos tool to respond with [ 96 ], which is the I2C address of the ATECC508A. If it does not, go back and verify your wiring or try another chip if possible.

$ mos call I2C.Scan
[ 96 ]

4. Configure the chip. You can use the sample configuration provided in the Mongoose OS Git repository. Save the configuration as atca-aws-test.yaml and set it with the extended mos commands:

$ mos config-set sys.atca.enable=true
$ mos -X atca-set-config atca-aws-test.yaml --dry-run=false
$ mos -X atca-lock-zone config --dry-run=false
$ mos -X atca-lock-zone data --dry-run=false

Note: These changes are irreversible. Once zones are locked, they cannot be unlocked. Also, this sample configuration is very permissive and is suitable only for testing; do not use it for production deployments. Refer to the Microchip manual and other documentation when creating a production configuration.

5. Write the generated key into the secure element. If you used the sample configuration, this is a two-step process:

a. Generate and set the key encryption key in slot 4:

$ openssl rand -hex 32 > slot4.key
$ mos -X atca-set-key 4 slot4.key --dry-run=false

 AECC508A rev 0x5000 S/N 0x012352aad1bbf378ee, config is locked, data is locked
 Slot 4 is a non-ECC private key slot
 SetKey successful.
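The slot 4 write key is simply 32 random bytes, hex-encoded. If you prefer, the same key material can be produced in Python (a stand-in for the `openssl rand -hex 32` call above):

```python
import secrets

# 32 random bytes encoded as 64 lowercase hex characters --
# the same shape as the output of `openssl rand -hex 32`.
slot4_key = secrets.token_hex(32)
print(slot4_key)
```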

b. Set the ECC key in slot 0:

$ mos -X atca-set-key 0 ecc.key.pem --write-key=slot4.key --dry-run=false

 AECC508A rev 0x5000 S/N 0x012352aad1bbf378ee, config is locked, data is locked
 Slot 0 is a ECC private key slot
 Parsed EC PRIVATE KEY
 Data zone is locked, will perform encrypted write using slot 4 using slot4.key
 SetKey successful.

6. Upload the public signed certificate to the device:

$ mos put ecc.crt.pem

7. Set the HTTP server configuration to use the uploaded certificate and private key from the device’s slot 0:

$ mos config-set http.listen_addr=:443 http.ssl_cert=ecc.crt.pem http.ssl_key=ATCA:0

Getting configuration...
Setting new configuration...
Saving and rebooting…

At startup, you should see the following in the device’s log:

mgos_sys_config_init_http HTTP server started on [443] (SSL)

And when you connect with a browser, you should see the following:

ATCA:2 ECDH get pubkey ok
ATCA:0 ECDSA sign ok
ATCA:2 ECDH ok

Perform AWS IoT setup and connect

Follow the MQTT example in the Mongoose OS Git repository.  After setting the Wi-Fi credentials, run this command to provision the ESP8266 board in AWS IoT and use the secure element:

$ mos aws-iot-setup --use-atca --aws-iot-policy=mos-default

What to expect

At this point, you should be connected to AWS IoT using the secure element. On an ESP8266, the connection negotiation time will drop from 10 seconds or more to less than one second. Your certificate is now protected by the ATECC508A and will be used to authenticate your device to AWS IoT whenever it needs to reconnect. This platform gets you one step closer to a secure production deployment.

Are you using the components we discussed in this post?  We love to see customer projects, products, and demos. Have questions or feedback? Let us know!

AWS Education Competency Partners – Supporting Education Customers and Their Missions Worldwide

by Kate Miller | on | in AWS Competencies, AWS Public Sector, AWS Summits, Education | | Comments

At re:Invent 2016, we launched the AWS Public Sector Program to recognize APN Partners with solutions and experience delivering government, education, and nonprofit customer missions globally. The goal of the AWS Public Sector Program is to provide specific guidance and resources to help APN Partners build a successful business while focusing on serving the specific needs of public sector customers.

The AWS public sector customer base continues to grow at a rapid pace. In the education space specifically, higher education, K-12, and research IT teams worldwide are using the AWS Cloud to address challenges such as disaster preparedness and scalability, and to improve student outcomes through learning analytics, all while concentrating on what matters most to them: driving the success of their students through innovative IT solutions and other organizational initiatives. Our priority is to help education customers on AWS easily identify and connect with APN Partners that have proven expertise and customer success in the education space and can help them take advantage of all that AWS offers.

To help customers easily identify and connect with APN Partners in the education space, today we’re excited to announce the launch of the AWS Education Competency.

What is the AWS Education Competency?

The AWS Education Competency helps customers identify Advanced or Premier tier APN Partners who have proven technical proficiency and demonstrated success building solutions for education customers using the AWS Cloud to drive success in teaching and learning, academic research, and institutional operations.

For APN Partners with expertise in this space, the AWS Education Competency is a fantastic way to differentiate your practice, showcase your commitment to driving success for the education community, and demonstrate your expertise in employing AWS architectural best practices. Achieving the AWS Education Competency is not easy; because your firm has met this high bar, customers can be confident in its proven ability to deliver services or solutions that meet their needs and provide additional value on AWS.

Congratulations to the following launch AWS Education Competency Partners

Education Consulting Partners have proven success and expertise in designing, building, deploying, and managing mission-critical workloads in the AWS Cloud for primary, secondary, post-secondary, and academic research education customers.

Congratulations to the following launch Consulting Partners!

  • Cloudmas
  • DLT Solutions
  • Enquizit Inc
  • Infiniti Consulting Group
  • ITEra
  • NorthBay
  • REAN Cloud

Education Technology Partners demonstrate expertise in the following categories:

Teaching and Learning

Solutions that bring innovative teaching and learning technology to help educators adapt to new standards, personalize learning, and deliver new and exciting digital learning experiences to students.

Congratulations to the launch Technology Partners in this category!

  • Alfresco
  • Blackboard
  • D2L
  • Echo360
  • Instructure
  • Remind

Administrative and Operations

Solutions that help educational institutions operate efficiently both on campus and online. Technology that empowers educational institutions to gain better visibility into IT operations and assists with regulatory requirements.

Congratulations to the launch Technology Partners in this category!

  • Acquia
  • Ellucian
  • Preservica
  • Solodev
  • Splunk
  • TechnologyOne

Do you want to become an AWS Education Competency Partner?

Learn more about the requirements to join the AWS Education Competency here. And read the AWS Public Sector Blog on the value of the Education Competency for customers here.

Want to hear more from two of our launch Education Competency Partners? Check out the following videos and learn how Echo360, an Advanced APN Technology Partner, and Infiniti Consulting, an Advanced APN Consulting Partner, are helping support education customers on AWS.

One Year Later: The Value of the AWS Government Competency for APN Partners and Customers

by Kate Miller | on | in AWS Competencies, AWS Public Sector, AWS Summits, Government, Public Sector | | Comments

When we launched the AWS Government Competency at the AWS Public Sector Summit in 2016, our mission was to better enable customers to connect with AWS Consulting and Technology Partners who have demonstrated technical proficiency and proven success in managing government workloads. At the same time, we strive to help innovative APN Partners with a focus on government customers and workloads further distinguish their expertise and gain additional visibility within their target customer base.

As we come upon the one year anniversary of the AWS Government Competency launch, we asked a few of our AWS Government Competency Partners to tell us a bit about their missions, why they decided to use and help support customers on AWS, their growth as an APN Partner, and the experiences they’ve had since becoming an AWS Government Competency Partner.

Who are AWS Government Competency Partners and how do they serve the government sector?

With representation around the world, our AWS Government Competency Partner ecosystem is diverse and strives to serve many levels of government. Take Okta, for example, an Advanced Technology Partner who holds not only the AWS Government Competency but also the Security Competency. “Okta’s mission is to enable any organization to use any technology, improving the connections between people and technology to make every organization more productive and secure,” explains Allen Clark, VP of Partnerships, Okta. “With a best-of-breed identity solution, we help government entities securely adopt new technologies, protect their employees, and engage with constituents logging into websites and portals. Today we serve all levels of government in the United States and around the world, from the Centers for Medicare and Medicaid (CMS), the Department of Justice (DOJ) and the Federal Communications Commission (FCC) in the US, to the Peterborough City Council in the United Kingdom.”

AWS Government Competency Partners seek to help government agencies meet their unique requirements and further their mission by taking advantage of the cloud.

“The Smartronix mission is to help our Government clients transform the way they consume and deliver IT services. We deliver cloud solutions that support critical government infrastructure services,” says Robert Groat, EVP, Technology and Strategy, Smartronix (a Premier Consulting Partner and multiple APN Competency holder). “Our advanced expertise in securing and monitoring cloud solutions that meet the rigorous FedRAMP Moderate and High baselines have positioned us to support some of the most demanding and compliance driven customers such as Department of the Treasury, Department of Justice, and Fannie Mae. We primarily work with Agencies seeking the highest levels of security compliance.”

The impact of these APN Partners is far-reaching and focused on helping government agencies working for the betterment of their citizens. “Currently, our most important government customers are the Mexican Secretary of Agriculture (SAGARPA) and the Mexican State of Guanajuato,” says Rene Bravo, CEO of IT Era, an Advanced Consulting Partner. “We helped both customers move from traditional data centers to AWS cloud and we now provide them infrastructure managed services.” “Our primary government customer is the Chilean national government, and we have offices in Chile, Mexico, Peru, Argentina, Colombia, and Ecuador,” explains Carolina Zamorano, AWS Business Manager for Soluciones Orion, a Premier Consulting Partner.

Being an APN Partner has allowed Splunk, an Advanced Technology Partner and Government, Security, and Big Data Competency holder to deploy SaaS quickly and on a global scale. Splunk’s relationship with AWS has been valuable as the company has grown their service – specifically servicing the Public Sector. “All of our customers are important, and Splunk has made investments to address the unique challenges of government agencies,” says Kevin Davis, VP, Public Sector, Splunk. “The AWS Government Competency helps enable Splunk and AWS to support government modernization initiatives by helping our joint customers to securely and efficiently transition their workloads to the cloud. With Splunk, government agencies have a proven solution to accelerate data-driven initiatives and analytics in the AWS Cloud. We are proud that we have been able to accelerate various initiatives for all 15 cabinet level departments, all four branches of the US military, all three branches of the US Federal government and agencies in 43 of the 50 states.”

Why become an APN Partner and use AWS?

For many of our AWS Government Competency Partners, the value of AWS lies in the agility and innovation AWS helps them drive both within their firm and for their end customers.

“We have worked with AWS since our inception in 2011. We chose to focus our solution on AWS because it offered the best product in the marketplace. That was true in 2011, and it is still true today in 2017. We think AWS is exceptional for its speed of innovation. For a successful and large company, the pace is truly amazing,” says Aaron Klein, Co-Founder & COO, CloudCheckr, an Advanced Technology Partner who also holds the Security Competency.

“AWS enables us to be agile — and allows us to focus on our solution while AWS focuses on the infrastructure. But what truly sets AWS apart is innovation: when AWS launches a new service like Amazon EC2 Container Service (ECS), AWS Key Management Service (KMS), and AWS Lambda, we take note and leverage those services to support our growing identity network and better serve our customers,” explains Clark from Okta.

The high standards established for APN Partners and deep support provided by AWS around the world has also been an important motivator for a number of Government Competency holders.

“As a Technology Partner, AWS really sets itself apart with its APN team. Everyone at AWS is supportive and collaborative and they have really provided a guiding light for us, especially as we continue to grow at the speed we’ve been growing at,” explains Alex Wong, Director of Marketing, SmartSimple, an Advanced Technology Partner.

“While AWS holds its APN Partners to high standards, AWS also consistently supports and works with those who meet the standards. AWS’ help through the APN and Competency Program has proven invaluable in helping CloudCheckr thrive and grow as a solution and as a company. We would not be where we are today without the benefits of being an APN Partner,” explains Aaron Klein, Co-Founder & COO, CloudCheckr.

And as AWS continues to grow its footprint locally, APN Partners are finding additional benefits. “We joined as an APN Partner at the end of 2012, when AWS was not yet as established in Mexico. We chose AWS because it made business sense to enter the cloud business with the industry leader. This has proven to be a good decision and AWS now has a great staff locally in our markets, Latin America and Spain,” says Rene Bravo, CEO of IT Era.

What’s the value in holding the AWS Government Competency?

As they’ve grown their businesses on AWS, why exactly have these APN Partners chosen to pursue the AWS Government Competency? The reasons are many, but a few key themes seem to surface: differentiation, credibility, and confidence earned with customers and with AWS.

“The Government Competency from AWS has opened doors for us within organizations and agencies that otherwise might have been a little slower to open. We wear the Government Competency badge as a sense of pride in the work that we’ve done to earn it and the work that we will do in the future to validate it,” explains Craig Atkinson, CEO and co-founder, JHC Technology, an Advanced Consulting Partner.

“The APN takes its Competency designations seriously. This gives credence to the companies that achieve these Competencies and provides a benchmark that differentiates other APN Partners in this space,” explains Robert Groat, EVP, Technology and Strategy, Smartronix. “We want our government clients to know that we are committed to delivering AWS solutions that meet their unique and demanding requirements. As an AWS Premier Partner in the AWS Government Competency Program we have benefited from being able to deliver highly secure, highly available, fault tolerant and innovative solutions that have transformed the way our government customers deliver services to its constituents.”

Becoming an AWS Government Competency Partner can also help APN Partners become more visible to AWS teams who are seeking to help their customers in the government space connect with APN Partners for value-added services and solutions. “We feel that being able to identify ourselves as a holder of the Government Competency serves as a key differentiator in the eyes of AWS, especially as it relates to the rapidly growing AWS Partner Network,” explains Hemant Datta, COO and co-founder, JHC Technology. “Our experiences to earn such recognition lend credence to the discussions with AWS and between ourselves, AWS, and the customer that the decisions and recommendations provided by JHC are in line with our history of delivering value to the government.”

How does the AWS Government Competency help customers on AWS?

The goal of the AWS Competency Program is to help customers identify and connect with APN Partners with proven expertise in a specific solution area, industry, or application. And we set a rigorously high bar for partners seeking the Competency badge. We seek to help customers feel confident that when they engage with a Competency partner, their unique requirements and business needs will be met. This is particularly important in the government space.

“We find being a Government Competency Partner is definitely important to customers, particularly in the federal space. Government users know that the APN thoroughly vets the Competency Partners. Having this verification of competence signals that the solution or service provider is serious and delivers a valuable solution tailored to government needs,” says Aaron Klein, Co-Founder & COO, CloudCheckr.

“Credibility is of great importance in the government sector. The AWS Government Competency gives Government clients, and AWS, more confidence in REAN Cloud and its solutions. The Government Competency gives the customers the ability to easily identify AWS Partners with expertise and skills in working with government customers to deliver mission-critical workloads and applications on AWS,” says Michael Skarzynski, VP, Public Sector Practice, REAN Cloud, a Premier Consulting Partner and multiple Competency holder.

Alex Wong of SmartSimple explains, “Customers in these industries can be assured that they’re working with highly qualified service providers, as all APN Partner products and solutions are validated by internal AWS teams. It also gives them an easy way to find and connect with vendors they can trust.”

Robert Groat, EVP, Technology and Strategy of Smartronix explained that for his firm, the competency designation helps establish that they are committed to meeting the unique requirements of Government clients: “Government clients need to know which APN Partners are committed to meeting their unique requirements. The AWS Government Competency is a public way of acknowledging the APN Partners that have achieved the standards required by the competency designation.”

In Summary…

Wong sums it up nicely when he says, “Participating in the Government Competency Program has been a joy for us at SmartSimple. The visibility the designation gives us both within AWS itself and externally with prospective clients can’t be measured, but the impact is definitely felt. We have a solid working relationship with our AWS contacts and we greatly appreciate how attentive they are to our needs, and how collaborative they are in helping us move the needle as we continue to grow.”

Meet Government Competency Partners at the AWS Public Sector Summit!

A number of AWS Government Competency Partners are sponsoring the AWS Public Sector Summit. We encourage you to meet this elite group of APN Partners and learn more about how they can help you in your journey to the AWS Cloud.

Would you like to learn more about becoming an AWS Government Competency Partner? Click here to learn about the benefits and requirements of the AWS Government Competency program. And find all of our current AWS Government Competency Partners here.

Are you looking to explore APN Partner success stories in the Government, Education, and Nonprofit sectors? We’ve got you covered! Visit our new case study page to discover stories from around the world.

Simplify Your Customer Engagement with AWS and Salesforce Heroku

by Kevin Cochran | on | in APN Technology Partners | | Comments

By Kevin Cochran. Kevin is a Strategic Partner Solutions Architect at AWS. 

Imagine what it would be like to write your web application, deploy it, and be done. No servers, no networking—just the excitement of coding and delivering. Now that’s a developer’s paradise—full control over all of the parts needed to do just that!

Yet, in today’s complex world of IT, bringing all the parts together to make this possible doesn’t always feel much different from putting together a puzzle. There are thousands of different services, and assembling them to make your application sparkle can be a little daunting.

It’s now commonplace, maybe even expected, for technology organizations to use best-of-breed tools from a number of sources to create an experience like this. But how do you choose?

We’re glad you asked! As CIOs make digital transformation a priority and take advantage of moving to the cloud, one trend has become clear – more and more companies are working together to offer their shared customers a better experience.

From simplifying and expanding customer engagements with high-impact results to running your IT infrastructure and resources with confidence, Salesforce and AWS are committed to helping tackle the challenges common to today’s business. With the combination of Salesforce and AWS, IT and business stakeholders have access to solutions that can help them provide a superior level of service to their customers.

“We’re thrilled about our great collaboration with AWS,” said Adam Gross, Heroku CEO, Salesforce. “Leveraging AWS with Salesforce Heroku is one of the easiest and most powerful ways for companies to create innovative, new connections with their customers.”

“The relationship between Salesforce and AWS is both broad and deep,” said David Wright, GM of Strategic Technology Partnerships at AWS.  “Salesforce Heroku developer tools run on AWS and allow developers easy access to AWS services.  This makes it easy for our joint customers to build applications that access data stored on AWS and leverage next-generation AWS services such as Amazon Connect, AWS IoT, and Amazon Redshift.”

Driving Customer Success Together

AWS and Salesforce Heroku work together to help customers make sense of it all and simplify the processes needed to make your application successful. With the added value of collaboration between these organizations, the combined services complement one another, resulting in a unified solution. In particular, the Salesforce Heroku platform enables developers to build, run, and operate apps entirely on AWS. By running on this platform, customers take advantage of many AWS benefits, such as scalability, security, and high availability. And Heroku customers can also more easily leverage the power of AWS services, such as data stores, analytics, machine learning, telephony, and more.

Today, we’ll show you how running Salesforce Heroku on AWS can save you time, as well as provide tremendous value to your customers. The best way to illustrate this solution is with a practical example.

A High-Level Illustration

Walking through a simple Heroku web application is probably the best place to start to illustrate how this solution can benefit you. A solution that combines a handful of AWS services, such as AWS Shield, Amazon Route 53, and Amazon S3, with Heroku to manage your contacts in Salesforce can be set up in a relatively short amount of time.

Let’s take a quick look at some of the components we’ll use to make this happen.

Media and Web Security Example

In this example, Amazon Route 53 is directing the main application traffic to the Amazon CloudFront CDN in front of a Heroku app, while directing requests for application assets to the CloudFront CDN in front of Amazon Simple Storage Service (Amazon S3). We’ve created a Web Application Firewall (AWS WAF) rule, which is used by both CloudFront distributions (the application and the site assets).

The Heroku application delivers the requested data, but you may also have it write customer information to your Heroku Postgres database. Using Heroku Connect, the data can be transformed and transferred directly to Salesforce, where you can easily manage your customer data. Heroku Connect can also keep your data synchronized. Learn more about Heroku add-ons here.

Each of these components is simple to set up and use, and this scenario could be a great way for you to get started. Let’s take a look at each one in a little more detail.

AWS Shield

AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS. There are two tiers of AWS Shield – Standard and Advanced. All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge.

Amazon Route 53

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. Using Route 53 as your DNS service, traffic can be directed to different destinations based on the address. For instance, www.yourapp.com (http://www.yourapp.com/) can be directed to a Heroku application while static.yourapp.com (http://static.yourapp.com/) can be directed to an Amazon S3 bucket containing images and other site assets.
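As a sketch of what that routing looks like (the domain, distribution name, and file name are illustrative placeholders, not values from a real deployment), the www record could be created with a Route 53 change batch applied via `aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://change.json`:

```json
{
  "Comment": "Alias www.yourapp.com to the CloudFront distribution in front of the Heroku app",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.yourapp.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcd.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Here Z2FDTNDATAQYW2 is the fixed hosted zone ID that Route 53 uses for CloudFront alias targets; a similar UPSERT for static.yourapp.com would point at the distribution in front of the S3 bucket.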

AWS WAF – Web Application Firewall

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting. Rules can be created to allow or deny traffic matching certain patterns. You can deploy AWS WAF on either Amazon CloudFront as part of your CDN solution or on the Application Load Balancer (ALB) that fronts your web servers or origin servers running on Amazon EC2.

Amazon CloudFront

Amazon CloudFront is a global content delivery network (CDN) service that accelerates the delivery of your site’s content by caching it at edge locations around the world.

Salesforce Heroku Application

Using Salesforce Heroku, your application runs in containers called Dynos. These containers hold all the necessary libraries for your application, and the scaling for your application is managed by Heroku. Heroku also provides integrations for full CI/CD pipelines.
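As a minimal sketch (the process command is an assumption about your app, not a prescribed setup), a Heroku app declares its process types in a Procfile at the repository root:

```text
web: node index.js
```

Scaling is then a one-liner, for example `heroku ps:scale web=2`, which runs the web process in two dynos.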

Amazon Simple Storage Service (Amazon S3)

Amazon Simple Storage Service (Amazon S3) is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. In my opinion, Amazon S3 is a great option for static content, whether you need to store site images, JavaScript and stylesheets, or other downloadable content.

Heroku Connect

The fastest and most convenient way to deliver content from Heroku to Salesforce is using Heroku Connect. Heroku Connect effectively transforms the data stored in your Heroku Postgres database, connects to Salesforce, and transfers the data. It provides two-way functionality, so updates to Salesforce can be reflected in your Heroku application.
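To sketch what that looks like in practice (assuming Heroku Connect has been configured to map the Salesforce Contact object into the default salesforce schema of your Heroku Postgres database; the column names are illustrative):

```sql
-- With a two-way Heroku Connect mapping, a row inserted here is
-- picked up by Heroku Connect and pushed to Salesforce as a Contact.
INSERT INTO salesforce.contact (firstname, lastname, email)
VALUES ('Pat', 'Example', 'pat@example.com');
```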

Taking It Further

With AWS and Salesforce Heroku you can do more for your business and customers. From securing simple dynamic applications to processing big data, we can help you find a fast path to the solution you need.

For more information on AWS services, visit:

https://aws.amazon.com/products/

For more information on Salesforce, visit:

http://www.salesforce.com

For more information on Heroku, visit:

http://www.heroku.com and https://www.salesforce.com/products/platform/products/heroku/

Join us at the AWS Summit – Chicago

by Kate Miller | on | in AWS Marketing, AWS Summits, AWS Training and Certification | | Comments

What Can You Expect at the AWS Summit – Chicago?

AWS Summits bring together the cloud computing community to connect, collaborate and learn about AWS. This free event, hosted around the world, is designed to educate you on the AWS platform.

Getting trained on AWS is one of the most important steps in the APN Partner journey. AWS training and certifications can help APN Partners develop deeper AWS knowledge and skills to better serve customers. We offer many ways you can learn, train and network at the AWS Summit – Chicago on July 26-27 at the McCormick Place Lakeside Center.

Bootcamps


There are five full-day training bootcamps happening on July 27.

AWS Technical Essentials: Audience Level: Introductory

AWS Technical Essentials is a one-day, introductory-level bootcamp that introduces you to AWS products, services, and common solutions. Learn More.

Note that due to popularity, we are offering two sessions of AWS Technical Essentials.

Secrets to Successful Cloud Transformation: Audience Level: Introductory

Secrets to Successful Cloud Transformations is a one-day, introductory-level bootcamp that teaches you how to select the right strategy, people, migration plan, and financial management methodology needed when moving your workloads to the cloud. Learn More.

Note: This course focuses on the business, rather than the technical, aspects of cloud transformation.

Building a Serverless Data Lake : Audience Level: Advanced

Building a Serverless Data Lake is a one-day, advanced-level bootcamp designed to teach you how to design, build, and operate a serverless data lake solution with AWS services. Learn More.

Running Container-Enabled Microservices on AWS: Audience Level: Expert

Running Container-Enabled Microservices on AWS is a one-day, expert-level bootcamp that provides an in-depth, hands-on introduction to managing and scaling container-enabled applications. Learn More.


Get AWS Certified


For the first time at the AWS Summit—Chicago, we are offering AWS certification exams. Learn More.

Learning


There will be more than 40 learning opportunities for you to take advantage of: services intros, new service deep dives, solutions, and tech talks. Discover more about the exciting opportunities to learn.

  • Sessions — One-hour breakout sessions led by AWS subject matter experts and/or top AWS customers. You’ll find introductory sessions and deep dive technical content.
  • Workshops — Two-hour, small-scale, hands-on sessions that provide a more tangible way to learn and maximize networking.
  • Game Day — A four-hour digital adventure game where you and your teammates take on explosive traffic growth, ever-changing code deployments, internal and external threats, and the challenge of working together as a team to keep your infrastructure online and your customers happy.
  • Security Jam — A four-hour jam consisting of a series of challenges related to incident response, forensics, and security automation.
  • Hands-on Labs — Evaluate and learn how to work with AWS solutions through step-by-step self-paced instructions on an AWS Console in a live practice environment.

Networking at the HUB


Join your fellow AWS Summit attendees in The HUB — a high-energy expo with top AWS technology and consulting partners, and access to AWS engineers and architects. Learn more.

We hope you will join us at the AWS Summit – Chicago. Learn more and register today.