AWS Startups Blog

Descomplica: why a social good startup migrated to AWS

by Andrea Li | in Startup Migration |

The social good sector focuses on creating a positive impact on an individual or society. Over the last 10 years, we’ve seen a 20% growth in the nonprofit sector alone. It’s not news that the social good sector is growing rapidly.

Lots of startups in this sector are leveraging innovative technology to solve our greatest challenges, but at the same time, they face a number of unique obstacles. These startups often have to manage a large user base that is not necessarily active at all times, but whose activity can spike abruptly based on current affairs. Another challenge is managing a multitude of international volunteers who come and go: these users need access to documents with different confidentiality levels, across multiple networks and devices, from varying locations around the world. Accounts may remain dormant for years before a sudden need arises to access extremely sensitive donor information.

Access, permission models, and data storage are just a few of the problems that companies in the social sector face. It takes time to understand these challenges and even longer to solve them. Some common barriers we hear against cloud adoption are high migration costs, doubts about data security, and a lack of technical know-how.

The situation is complicated with no one-size-fits-all solution. That’s why AWS is very excited to see a growing number of social good startups trust and pick us to support their technology. In this post, we dive a little deeper into why Descomplica, a Brazilian EdTech startup, decided to migrate to AWS from another cloud platform.

Descomplica is based in Rio de Janeiro. Their mission is to make learning frictionless and easy. Having realized that high school students are tech-savvy, mobile, and constantly online, Descomplica has built out a thorough education platform that gives students content (like study plans and course materials) and different ways to consume the material (like live-stream content and SMS-based study tools). Descomplica raised more than $14M, attracting venture firms like Social Capital and Valor Capital.

Descomplica’s platform is in high demand. Brazil is the third largest market for social networks after the US and India. The company scaled very quickly and now has a library of 15,000+ videos with over 8 million streams every month. Initially, the significant increase in the number of users and amount of content being loaded caused a lot of unexplained crashes, which made the platform increasingly unreliable from a customer standpoint. Apart from complications with user management systems and billing, the lack of documentation and resources also made it difficult for Descomplica’s team to build sustainably for quick growth.

To fix these problems, Descomplica partnered with AWS to migrate all their services to the AWS platform. The migration took only one month. Descomplica was able to automate the entire deployment of their application using their continuous integration services, and they gained a new, reliable system of user permissions and access keys. They chose AWS because of our ease of use, deployment, and support. They stayed with us because of our reliability.

As we know, every minute and dollar that a social good startup invests in technology is time and money that’s not put towards their cause. We understand this at AWS and therefore partner very closely with all our customers to architect properly, securely and cost-efficiently. Our goal is to take care of the technological front so startups like Descomplica can focus on their true mission: delivering high-quality education to students who can’t afford top educational institutions. And that is why social good startups are moving to AWS.

Fighting Off the Bad Guys with Stealth Security

by Alexander Moss-Bolaños | in Startups On Air |

This post is part of the Startups on Air series. Startup Evangelist Mackenzie Kosut visits different startups and learns who they are, what they do, and how they use AWS.

Michael Barrett, CEO and co-founder of Stealth Security, joined PayPal in 2006 as its first Chief Information Security Officer, where he built an award-winning security organization that defended the company’s website and digital infrastructure against cyberattacks and threats. By 2013 he noticed a consistent pattern: he wanted to buy security products, but couldn’t because they simply didn’t exist on the market. At the same time, he noticed that new, more sophisticated types of automated web attacks were evading his traditional security tools and consuming more and more of his team’s time and resources. Uncertain what the ideal solution would be, but sensing the market opportunity, he left PayPal and began building a world-class founding team drawn from some of the largest payments and security technology firms.

Stealth Security helps companies proactively defend their online businesses and customer data from automated attacks, such as credential verification, fake account creation, content theft and scraping, and web DDoS, that evade traditional security and anti-fraud tools. Its next-generation WAF is an enterprise-class solution purpose-built for protecting websites, mobile apps, and enterprise APIs from all types of automated attacks and unwanted traffic. It is also the industry’s first solution that can dynamically adapt to new attack patterns. Using real-time network traffic analysis, behavioral analytics, real-time threat intelligence, and machine learning, it accurately detects and mitigates attacks with no effect on legitimate user traffic.


Technical Recap:

“It’s a natural fit for anyone who is running their infrastructure in AWS.”

-Nikunj Bansal (Principal Engineer), @nikunj_stealth

Stealth Security is a deployed solution that can also run within AWS. In fact, they do all of the building and testing of their solution in AWS. “It’s a natural fit for anyone who is running their infrastructure in AWS,” proclaimed Nikunj Bansal, principal engineer at Stealth Security, who referred to the process as a “very seamless integration.”

Let’s talk environment. Stealth Security runs on Amazon EC2 and relies heavily on Amazon S3 and Docker for its storage and container needs. Looking ahead, Stealth Security is working toward a solution that is more deeply integrated with AWS, so that anyone using services like Amazon CloudFront can adopt their solution right away without major configuration changes. They are also preparing to move to AWS Lambda, which would increase their compute efficiency and give them virtually unlimited scalability.


Michael Barrett, CEO and co-founder of Stealth Security, follows his own triangle model of website security when it comes to protecting a website’s traffic. He has broken down the world of protecting web services into three layers, or types of attacks:

1. Infrastructure
2. Syntactic
3. Semantic

Barrett believes that understanding the nature of a web transaction should directly influence the correct course of action. First, you need to understand the intent. Then, you determine whether the transaction is automated. Next, you figure out the nature of the transaction. Finally, you decide on the course of action.

Interested in learning more about Stealth Security and their approach to cybersecurity? You can check out their website here and follow them on Twitter here.

Talking Health and HIPAA Compliance with Wellpepper

by Alexander Moss-Bolaños | in Startups On Air |

This post is part of the Startups on Air series. Startup Evangelist Mackenzie Kosut visits different startups and learns who they are, what they do, and how they use AWS.


Wellpepper is a platform for digital patient treatment plans that helps patients follow instructions, empowers them to self-manage their healthcare outside the clinic, and connects them to their healthcare providers when they need additional support.

Co-founders Anne Weiler and Mike Van Snellenberg identified the problem of a lack of continuity of care when Anne’s mom contracted a rare autoimmune disease. After six months in the hospital, she was discharged with no instructions and had to wait over a month for a follow-up visit. This lack of continuity of care at such a crucial time was the impetus for Wellpepper.

Wellpepper solves this problem with actionable care plans built from reusable building blocks and informed by insights from over 250,000 patient actions. Patient instructions are broken down into simple tasks, educational materials, and custom video that lets patients record their experiences. Healthcare organizations can track patient results in real time against their own best practices and protocols.


Technical Recap:

“One of the nice things about using AWS in a HIPAA model is that it’s a shared responsibility model.”

-Mike Van Snellenberg (CTO & Co-Founder)

Wellpepper wraps all of their services within a virtual private cloud (VPC) and uses many of the AWS services that are HIPAA-eligible, such as Amazon EC2, Amazon S3, and Amazon EBS. They leverage an Elastic Load Balancer, which handles SSL termination for public traffic. Their app tier, written in Node.js, serves dynamic content and runs APIs and application services. Their static web tier houses their content and portals. As far as deployments go, Wellpepper is in the midst of migrating to AWS CodeDeploy and AWS CloudFormation.

“One of the nice things about using AWS in a HIPAA model is that it’s a shared responsibility model,” explained Mike, whose team relies on encrypted storage in the back end and manually encrypted files in the static web tier. He went on to explain how easy it is to replicate from shared tenancy over to dedicated tenancy (minus their web tier).

Wellpepper recently underwent a successful HIPAA audit and shared some of the steps they take to secure their environment. Under the shared responsibility model, AWS manages security from the hypervisor down to the physical facility, and it is up to the customer to build security into the application itself. Wellpepper uses simple password-based authentication and OAuth.

For those of you who are new to HIPAA compliance, Mike has a few reassuring words: “HIPAA is not as scary as you’d think. It’s just a lot of general, good security practices.” That said, he recommends getting comfortable with HIPAA before diving into a serious project.

There are a lot of good services in AWS that you can leverage that make architecting and scaling your infrastructure easy. For example, when you encrypt your EBS volumes, you’ve technically met your encryption requirements for data at rest. If you need services that currently aren’t HIPAA-eligible, you can still run your own compliant instances on EC2 with encrypted EBS.
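As a sketch of that last pattern, the snippet below builds the parameters for an encrypted EBS volume and then (only when actually run against AWS credentials) creates it with boto3. The function names, availability zone, and key alias are illustrative, not from Wellpepper's setup; with `Encrypted=True` and no `KmsKeyId`, EBS falls back to the account's default AWS-managed key.

```python
def encrypted_volume_params(az, size_gib, kms_key_id=None):
    """Build kwargs for ec2_client.create_volume with encryption at rest."""
    params = {
        "AvailabilityZone": az,
        "Size": size_gib,          # in GiB
        "VolumeType": "gp2",
        "Encrypted": True,         # encryption at rest for the volume
    }
    if kms_key_id:
        params["KmsKeyId"] = kms_key_id  # optional customer-managed KMS key
    return params


def create_encrypted_volume(az, size_gib, kms_key_id=None):
    import boto3  # imported lazily; calling this requires AWS credentials

    ec2 = boto3.client("ec2")
    return ec2.create_volume(**encrypted_volume_params(az, size_gib, kms_key_id))
```

The split between a pure parameter builder and the API call keeps the encryption decision easy to review and test on its own.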

Interested in learning more about Wellpepper and staying up to date with their latest endeavors? Follow them on Twitter here.

Using Amazon Rekognition to enhance MacOS Finder Tags

by Mackenzie Kosut | in Guides & Best Practices |

Sunday morning I was looking at a large folder on my laptop containing hundreds of images. Thumbnails are wonderful, but what I really wanted was an easy way to quickly search the folder for photos that contained pictures of cliffs.


Starting in OS X Mavericks, you can use the Tags feature to find tagged files in the Finder window. I wanted to know how difficult it would be to have my laptop send photos to Amazon Rekognition, have each photo analyzed by its deep visual learning, and then apply the identified objects as tags to my files that I could then search in Finder.

This would give me the ability to search in Finder or Spotlight (a MacOS search feature) by using Tag:<term>. Want to find all of your photos of cats? Tag:Cat would instantly return these results to you.

After finding a snippet of code for the writexattrs function online, it was just a matter of passing the image to Amazon Rekognition, then looping the Tag results and writing them to the file. In about 30 minutes I had 50 lines of code and a working prototype.
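The full script isn't reproduced here, but the core of the trick can be sketched in a few lines of Python. Labels come back from Rekognition's DetectLabels API, and macOS Finder tags live in the `com.apple.metadata:_kMDItemUserTags` extended attribute as a binary plist (the same idea as the `writexattrs` snippet). The helper names below are my own, not the ones from the original script:

```python
import plistlib
import subprocess

def detect_labels(image_path, max_labels=10, min_confidence=75.0):
    """Ask Amazon Rekognition which objects appear in the image."""
    import boto3  # lazy import; calling this requires AWS credentials

    with open(image_path, "rb") as f:
        resp = boto3.client("rekognition").detect_labels(
            Image={"Bytes": f.read()},
            MaxLabels=max_labels,
            MinConfidence=min_confidence,
        )
    return [label["Name"] for label in resp["Labels"]]

def finder_tags_plist(tags):
    """Encode tag names as the binary plist Finder stores in
    the com.apple.metadata:_kMDItemUserTags extended attribute."""
    return plistlib.dumps(list(tags), fmt=plistlib.FMT_BINARY)

def write_finder_tags(path, tags):
    """Attach the tags to the file via xattr (macOS only);
    -wx writes the attribute value from hex bytes."""
    subprocess.run(
        ["xattr", "-wx", "com.apple.metadata:_kMDItemUserTags",
         finder_tags_plist(tags).hex(), path],
        check=True,
    )
```

Chaining the two pieces, `write_finder_tags(p, detect_labels(p))` is the whole prototype in one line.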

The code is available here to play with.


That worked fine for processing a large folder of images. To improve performance, a team member submitted a pull request that resizes the images prior to upload and runs the process across multiple threads. What I really wanted, though, was a way to have these images auto-tagged as they are added to the folder.
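That pull request isn't shown here, but the multithreading half of it follows a standard pattern. A minimal sketch, with a stub standing in for the real resize-upload-and-tag work:

```python
from concurrent.futures import ThreadPoolExecutor

def process_image(path):
    # In the real script this resizes the image, sends it to
    # Rekognition, and writes the resulting tags; stubbed here.
    return (path, "done")

def process_folder(paths, workers=8):
    """Run process_image across many files with a pool of threads.

    Threads work well here because the job is dominated by network
    I/O (uploading each image to Rekognition), not CPU time."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order in its results
        return list(pool.map(process_image, paths))
```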

Enter MacOS Automator. Automator provides an easy interface to watch for folder activity and run an action when a new file is written. It’s similar to how AWS Lambda can run any time a file is modified in Amazon S3.

This workflow waits for new files to write into the “TagMe” folder, and passes them to the script with the filename as a parameter.

Now for the final test:




The one big realization I had while playing with this hack is that AWS can be used to extend the capabilities of virtually anything. Here I have a simple underpowered laptop, yet I’m able to augment the capabilities of it by tapping into the enormous deep learning of Amazon Rekognition to visually inspect my images. All possible with a minimal amount of code!

Launch your app with the AWS Startup Kit

by Brent Rabowsky | in Guides & Best Practices |

The AWS Startup Kit is a set of resources designed to accelerate startups’ product development on AWS. A core component of the Startup Kit is a set of well-architected sample workloads that can be launched within minutes. These workloads, which reflect best practices for reliability, networking, and security, are supported by AWS CloudFormation templates and code published on GitHub. They easily can be extended to create a wide variety of applications.

A previous Startup Kit blog post, Building a VPC with the AWS Startup Kit, described templates to create fundamental cloud infrastructure building blocks, including an Amazon Virtual Private Cloud (VPC), a bastion host, and a relational database. This blog post presents an app template that launches a sample app in AWS Elastic Beanstalk. All Startup Kit templates are available on GitHub. When the app template is used with the VPC, bastion, and database templates, together they create the following architecture:


Before you work with the templates, you should be familiar with basic VPC concepts. If not, please read the previous blog post referenced above and the VPC documentation.

A forthcoming YouTube “how to” video will present a walkthrough of the process of using the templates. In the meantime, the README in the GitHub repository has detailed template usage instructions. To test out your VPC and database setup on AWS, or as a starting point for your own projects, you can deploy the Startup Kit Node.js sample workload using the app template. This sample workload is also available on GitHub. However, it is not necessary to use a Startup Kit sample workload with the app template (for details, see the end of this post).*

Managing your app with Elastic Beanstalk

There are many different ways to deploy applications on AWS, but the simplest is AWS Elastic Beanstalk, a complete application management service. Elastic Beanstalk supports many different technologies and stacks, including single- and multi-container Docker, Node.js, Ruby, Python, Java, Go, and .NET. See the documentation for a complete list of supported platforms.

Here are some of the benefits of using Elastic Beanstalk:

  • Automatically handles capacity provisioning, load balancing, and application health monitoring.
  • Automatically scales your application based on your application’s specific needs using easily adjustable Auto Scaling settings.
  • Keeps the underlying platform running your application up-to-date with the latest patches and updates.
  • Provides the freedom to select the AWS resources, such as an Amazon EC2 instance type, that are optimal for your application.

After you launch an app with the app template, you might find you need to adjust some of the Elastic Beanstalk configuration parameters provided by the template. For example, you might decide to increase the size of your Auto Scaling group to handle increased traffic as your user base grows. You can change these configuration parameters through the Elastic Beanstalk console. Simply go to the dashboard for your Elastic Beanstalk environment, and then click Configuration in the left pane. On the Configuration page, there are options to change parameters for scaling, instance type, environment variables, and more.
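The same parameters can also be changed programmatically rather than through the console. As an illustrative sketch (the environment name and group sizes are placeholders, not values from the Startup Kit templates), the Auto Scaling group size maps to option settings in the `aws:autoscaling:asg` namespace, applied via the Elastic Beanstalk API:

```python
def asg_option_settings(min_size, max_size):
    """Option settings that resize an Elastic Beanstalk
    environment's Auto Scaling group."""
    return [
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MinSize", "Value": str(min_size)},
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MaxSize", "Value": str(max_size)},
    ]

def scale_environment(env_name, min_size, max_size):
    import boto3  # lazy import; calling this requires AWS credentials

    eb = boto3.client("elasticbeanstalk")
    return eb.update_environment(
        EnvironmentName=env_name,
        OptionSettings=asg_option_settings(min_size, max_size),
    )
```

For example, `scale_environment("my-env", 2, 6)` would let the group grow to six instances under load.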

Other Startup Kit resources

We plan to expand the Startup Kit in the near future with additional resources, such as sample workloads for Ruby on Rails, Spring Boot, and other stacks. Here’s a list of the resources that have been published so far:

Stay tuned for the next Startup Kit release!


Special thanks to Itzik Paz for providing the architecture diagram at the top of this post.


* How to integrate an app other than a Startup Kit sample workload:  If you want to launch your own app with the app template, your app should be based on one of the technologies currently supported by the template: Node.js, Ruby on Rails, and Java (Spring Boot). Also, database parameters are managed by environment variables in the app template, so your app’s database parameters should conform to the naming conventions in the template. Alternatively, you can fork the GitHub repository and modify the app template’s environment variables section to change the names as needed. Instructions for writing and modifying CloudFormation templates are beyond the scope of this post, but the app template has been written to clearly identify the relevant sections and to be as self-explanatory as possible. The CloudFormation documentation is comprehensive in case you need further guidance. If you’re not using Spring, you can delete the Spring environment variables section; likewise, if you’re using Spring, you can delete the environment variables section for Node.js and Rails.

AWS in Healthcare and Life Sciences at HIMSS

by Alexander Moss-Bolaños | in Events, Startups On Air |

Last week we had the chance to catch up with some of our customers at HIMSS in Orlando, Florida. In 2015, healthcare made up 17.8% of the U.S. GDP, which equates to roughly $3.2 trillion in spending (up 5.8% from the previous year). With the advent of cloud technology in this space, we are beginning to see how companies of all sizes are using AWS to provide superior analytics, integration, and services to customers all around the world. Whether your company is made up of 2 people or 20,000, you can use the cloud just the same by scaling to meet any of your business needs.

This year’s HIMSS brought in over 40,000 participants. We had the opportunity to catch up with 16 of our customers at their booths, and learn all about what they’re working on within the cloud computing space.

Following is a list of the companies we visited. Follow @awsstartups and @awsonair to stay up to date with upcoming AWS on Air videos, where our Global Startup Evangelist, Mackenzie Kosut, will be visiting Singapore, London, Portugal, Spain, Tokyo, and more to check in with some of the most cutting-edge startups in the world.

Re-live HIMSS


  • With Orion Health, talking about Amadeus, a comprehensive approach to acquiring, measuring, analyzing, and presenting actionable clinical and claims data, as well as non-traditional data, powered by AWS.
  • At Connectria, a “jerk-free” managed services provider that helps run your HIPAA-compliant applications on AWS by providing performance monitoring, security, compliance, cost optimization, and more.
  • Live with Fortinet, which is helping companies manage complexity and security when it comes to digital transformation, IoT, and cybersecurity.

We interviewed over 20 Startups in Tel Aviv and here’s what we found

by Rei Biermann | in Events, Startups On Air |

Last week we had the opportunity to meet with over 20 startups in Tel Aviv. You may have caught our recent blog post, “The Israeli Recipe,” where we shared a look into this thriving startup ecosystem. With 5,720 technology companies responsible for 45% of the country’s GDP, it’s no surprise these companies are building in every imaginable industry. We had the great fortune to sit down with some of these founders, hear about what they are building, and find out how AWS is helping to power their innovation.
Here is an aggregate list of the startups we visited. Stay tuned to @awsstartups for upcoming Startups On Air videos, where our Global Startup Evangelist Mackenzie Kosut will be visiting London, Singapore, China, Spain, Portugal, and more to meet and talk with some of the most exciting startups in the world.


Watch and Learn


  • Live with Moovit discussing Amazon Redshift, Amazon Kinesis, Amazon Simple Queue Service, Amazon Elastic File System, and more!
  • Catching up with Spotinst about intelligent workload management to help reduce your Amazon Elastic Compute Cloud costs by 80%.
  • Catch up with Cloudinary, who provides cloud-based image and video management leveraging AWS Lambda, Amazon Athena, Amazon Aurora, and more!
  • Watch as Gong demos their ability to automatically record, transcribe, and analyze your sales calls.
  • Yotpo showcases their user- and customer-generated content platform powered by Amazon EC2 Container Service, Amazon Elastic MapReduce, AWS Lambda, and more!
  • LePROMO demos instant promo video generation powered by Amazon SWF, Amazon EC2 Spot Instances, Amazon EC2 Elastic GPUs, Amazon Elastic Transcoder, and more!
  • Talking with JENNY, part of TechStars TLV, about building an open source conversational platform and more!
  • Talking with KARD, who is maximizing credit rewards programs, built with PCI compliance on AWS!
  • Visiting Funnster, who is making fun stuff happen with AWS Lambda, Amazon SNS, Amazon EC2, and more!

Building a VPC with the AWS Startup Kit

by Brent Rabowsky | in Guides & Best Practices |

The AWS Startup Kit provides resources to help startups begin building applications on AWS. Included with the Startup Kit is a set of AWS CloudFormation templates that create fundamental cloud infrastructure building blocks, including an Amazon Virtual Private Cloud (Amazon VPC), a bastion host, and an optional relational database. The templates are available on GitHub and create the following architecture:

Before you work with the templates, you should be familiar with basic VPC concepts. If not, see the VPC documentation. The VPC template is the foundation for everything you build on AWS with the Startup Kit. It creates a VPC with the following network resources:

  • Two public subnets, which have routes to a public Internet gateway.
  • Two private subnets, which do NOT have routes to the public Internet gateway.
  • A NAT Gateway to allow instances in private subnets to communicate with the public Internet, for example, to pull down patches and upgrades, and access AWS services with public endpoints such as Amazon DynamoDB.
  • Two route tables, one for public subnets and the other for private subnets.
  • Security groups for an app, load balancer, database, and bastion host.

The bastion host template creates a bastion host that provides SSH access to resources you place in private subnets for greater security. Resources placed in private subnets could include application instances, database instances, analytics clusters, and other resources you do not want to be discoverable via the public Internet. For example, along with enabling proper authentication and authorization controls, placing database instances in private subnets can help avoid security problems risked by exposing databases to the public Internet.

After you’ve created your VPC and bastion host, you can optionally create a relational database using the database template. Either a MySQL or PostgreSQL database is created in the Amazon Relational Database Service (Amazon RDS), which automates much of the heavy lifting of database setup and maintenance. Following best practices, the database is created in your VPC’s private subnets and is concealed from the public Internet.
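If you ever need to create a similar database outside of CloudFormation, the same best practices map directly onto the RDS API. A hedged sketch using boto3 (identifiers, instance class, and storage size below are placeholders, not values taken from the Startup Kit templates):

```python
def rds_instance_params(name, engine, user, password,
                        subnet_group, security_group_id):
    """Build kwargs for rds_client.create_db_instance, keeping the
    database in private subnets and off the public Internet."""
    assert engine in ("mysql", "postgres")
    return {
        "DBInstanceIdentifier": name,
        "Engine": engine,
        "DBInstanceClass": "db.t2.micro",            # placeholder size
        "AllocatedStorage": 5,                        # GiB, placeholder
        "MasterUsername": user,
        "MasterUserPassword": password,
        "DBSubnetGroupName": subnet_group,            # the private subnets
        "VpcSecurityGroupIds": [security_group_id],   # the DB security group
        "PubliclyAccessible": False,  # concealed from the public Internet
    }

def create_database(**kwargs):
    import boto3  # lazy import; calling this requires AWS credentials

    return boto3.client("rds").create_db_instance(**rds_instance_params(**kwargs))
```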

A forthcoming YouTube “how to” video will present a walkthrough of the process of using the templates. In the meantime, the README of the GitHub repository has detailed template usage instructions.


Managing Your Infrastructure with CloudFormation

You can manage your infrastructure on AWS entirely via the AWS console. However, there are many advantages to following an “infrastructure as code” approach using CloudFormation or similar tools.

CloudFormation provides an easy way to create and manage a collection of related AWS resources, allowing you to provision and update them in an orderly and predictable fashion. Here are some of the benefits of using CloudFormation to manage your infrastructure:

  • Dependencies between resources are managed for you by CloudFormation, so you don’t need to figure out the order of provisioning.
  • You can version control your infrastructure like your application code by keeping your CloudFormation templates in Git or another source control solution.
  • You can parameterize your templates so you can deploy the same stack with variations for different environments (test or prod) or different regions.

Over time, you might find you need to add new resources to the existing resources provided by the Startup Kit templates. For example, if you need to run 1,000 or more instances, you will exhaust the IP addresses available in the existing subnets and will need to add more subnets. Add new resources by modifying the templates and committing the changes in your source control repository, rather than making changes through the AWS console. This makes it easier to track the changes and roll them back if necessary.

For details about the capabilities of CloudFormation and how to write templates, see the CloudFormation documentation. You can declare resources using a straightforward YAML (or JSON) syntax. For example, the following snippet from the VPC template shows the simple syntax for creating the top-level VPC resource. As used in the snippet, CloudFormation’s FindInMap and Ref functions enable dynamic lookup of the CIDR block for the VPC and the name of the VPC stack, respectively:

    VPC:
      Type: AWS::EC2::VPC
      Properties:
        CidrBlock: !FindInMap [CIDRMap, VPC, CIDR]
        EnableDnsSupport: true
        EnableDnsHostnames: true
        Tags:
          - Key: Name
            Value: !Ref "AWS::StackName"


Connecting to Your Instances and Database

In general, it is best to avoid connecting to your instances with SSH to manage them individually. Instead, manage your instances using a higher-level management service such as AWS Elastic Beanstalk or AWS OpsWorks. When you do need to connect to instances for debugging purposes, connect via the bastion host created by the bastion template. One way to do this is to use SSH agent forwarding. For details about how to set this up on your local computer, consult the relevant AWS blog post.

Because the database is in a private subnet, it also is necessary to connect to it via the bastion host using a method such as TCP/IP over SSH. For an example of how to do this with MySQL Workbench, see the relevant documentation and the following screenshot.


In the Manage Server Connections dialog box for your database connection, fill in the following values:

  1. For SSH Hostname, type the public DNS name of your bastion host.
  2. For SSH Username, type ec2-user.
  3. Leave SSH Password blank.
  4. For SSH Key File, type the path to the private key file of the EC2 key pair you created.
  5. For MySQL Hostname, type the RdsDbURL value from the Outputs tab for the database stack in the CloudFormation console.
  6. For MySQL Server Port, type 3306.
  7. For Username and Password, enter the values you chose when you created the database.
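If you prefer the command line to MySQL Workbench, the same TCP/IP-over-SSH connection can be opened as a plain SSH tunnel. A sketch that builds the equivalent command (the bastion DNS name, key path, and RDS URL are placeholders for your own values):

```python
def tunnel_command(bastion_dns, rds_url, key_file, local_port=3307):
    """Build an ssh command that forwards local_port to the
    database's port 3306 through the bastion host."""
    return [
        "ssh", "-i", key_file,
        "-N",  # no remote command; just hold the port forward open
        "-L", "%d:%s:3306" % (local_port, rds_url),
        "ec2-user@" + bastion_dns,
    ]
```

With the tunnel running (for example via `subprocess.Popen(tunnel_command(...))`), point any MySQL client at 127.0.0.1:3307 using the database username and password.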


Next Steps

After you’ve created your VPC-related infrastructure with the Startup Kit templates, you can add on top of it applications, analytics clusters, and other components using any technologies of your choice. If you’re building an application such as a web app or RESTful API, Elastic Beanstalk can help automate the process of setting up, managing, and scaling your application.

Whichever technologies you use, be sure to place load balancer resources in the public subnets of your VPC, and spin up application instances in your private subnets. Also, make sure to assign the relevant security group created by the VPC template to each of your components. Check the Outputs tab of the VPC stack in the CloudFormation console for the IDs of the security groups, which are prefixed with sg-. Here’s how the security groups should be assigned:

  • The ELBSecurityGroup should be assigned to load balancers, such as Application Load Balancers or Classic Load Balancers. This security group allows load balancers to talk to application instances.
  • The AppSecurityGroup should be assigned to application instances, such as RESTful API servers or web servers. This security group allows those instances to talk to the database as well as the load balancer, and receive SSH traffic from the bastion host.

Besides the fundamental infrastructure templates discussed in this blog post, the Startup Kit also includes the Startup Kit Serverless Workload. Watch out for more Startup Kit information in the near future!


Special thanks to Itzik Paz for providing the architecture diagram at the top of this post.

Bringing art to Amazon Alexa on AWS Lambda

by admin | in Featured Guests |

Guest post by Daniel Doubrovkine, CTO, Artsy

At a recent Artsy board meeting an investor asked, “You’ve shipped a tremendous amount of software in 2016 with a very small team. How did you do that?”

Indeed, in 2016 we built a new live auctions service and executed over 40 auctions for major auction houses, including Phillips and Heritage. We simultaneously grew our online publishing business to become the most read art publication in the world. And we more than doubled the number of gallery partners on the platform, all while seeing fairly moderate growth in operational costs.

This progress is the result of combining great people, an exceptionally efficient organization, and a systematic approach to creating experiments with the breadth, depth, and virtually unlimited power of AWS. Practically, this means that we evaluate and adopt at least one major new framework with each non-critical project. We develop small ideas as opportunities to learn, and often graduate these into production workloads. For example, last year we tried and have now fully adopted Kubernetes with Amazon ECR. And today we’re exploring AWS Lambda for mission-critical applications, after first using it to load data from Amazon S3 into Amazon Redshift and then shipping an Alexa skill that turns the little voice-activated device into an art historian.

In this post, I walk you through developing, deploying, and monitoring a Node.js Lambda function that powers Artsy on Alexa. We implement an Alexa skill that runs on a development machine, deploy the code to AWS Lambda, and enable it on an Alexa device, the Amazon Echo. You can find the complete source code for this process on GitHub.

First, a bit of context about the Amazon Echo. The device contains a built-in speech recognizer for the wake word, so it’s always listening. After it hears the wake word, a blue light ring turns on, and it begins transmitting the user’s voice (called an utterance) to the Alexa platform that runs on AWS. The light ring indicates that Alexa is “listening.” The Alexa cloud service translates speech to text and runs it through a natural language system to identify an intent, such as “ask Artsy about”. The intent is sent to a skill (a Lambda function) that generates a directive to “speak” along with markup in SSML format, which is transformed into voice and sent in WAV format back to the device, to be played back to the user.
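Concretely, the “speak” directive the skill returns is just JSON. A minimal sketch of the response envelope a Lambda-backed skill hands back to the Alexa service (the alexa-app library used later in this post assembles this structure for you; the function name here is my own):

```python
def alexa_speak_response(ssml_text, end_session=True):
    """Minimal Alexa skill response: speak some SSML and
    optionally keep the session open for a follow-up."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "SSML",
                "ssml": "<speak>" + ssml_text + "</speak>",
            },
            "shouldEndSession": end_session,
        },
    }
```

Passing `end_session=False` is what makes Alexa keep listening for the user's next utterance.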

To get started, you need an Amazon Apps and Services Developer account and access to AWS Lambda.


Designing an intent


To get Alexa to listen, you first design an intent. Each intent is identified by a name and a set of slots. The intents have to be simple and clear and use English language words or predefined vocabularies. I started with a simple “ask Artsy about an artist” intent, which takes an artist’s name as input:

   "intents": [
         "intent": "AboutIntent",
         "slots": [
               "name": "VALUE",
               "type": "NAME"

The only possible sample utterance of this intent is “about {VALUE}”. The “ask Artsy” portion is implied.

Alexa supports several built-in slot types, such as “AMAZON.DATE” or “AMAZON.NUMBER”. Because Alexa cannot understand artists’ names out-of-the-box, we had to teach it with a custom, user-defined slot type added to the Skill Interaction Model with about a thousand of the most popular artists’ names on Artsy.
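For illustration, a custom slot type is little more than a named list of sample values. The names below are examples only; the real list on Artsy has about a thousand entries.

```javascript
// A few sample values for the custom NAME slot type. Pasting a list like
// this into the Skill Interaction Model helps Alexa's recognizer bias
// toward artist names it would otherwise mis-hear.
var artistNames = [
  'norman rockwell',
  'andy warhol',
  'frida kahlo'
];
```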



Implementing a skill


Intents are the API of a skill, otherwise known as an Alexa app. Using the open-source alexa-app library from the alexa-js community makes implementing intents easy.

In the following code, we define a “launch” event that is invoked when the skill is launched by the user (for example, “Alexa, open Artsy”). The launch event is followed by the “about” intent that we described earlier:

var alexa = require('alexa-app');
var app = new alexa.app('artsy');

app.launch(function(req, res) {
    res
        // welcome message
        .say("Welcome to Artsy! Ask me about an artist.")
        // don't close the session, wait for user input (an artist name)
        // and provide a re-prompt in case the user says something meaningless
        .shouldEndSession(false, "Ask me about an artist. Say help if you need help or exit any time to exit.")
        // speak the response
        .send();
});
app.intent('AboutIntent', {
        "slots": {
            "VALUE": "NAME"
        },
        "utterances": [
            "about {-|VALUE}"
        ]
    },
    function(req, res) {
        // intent implementation goes here
    }
);
The skill expects a slot value, which is the artist’s name.

var value = req.slot('VALUE');

if (!value) {
  return res
    .say("Sorry, I didn't get that artist name.")
    // don't close the session, wait for user input again (an artist name)
    .shouldEndSession(false, "Ask me about an artist. Say help if you need help or exit any time to exit.");
} else {
  // asynchronously look up the artist in the Artsy API, read their bio
  // tell alexa-app that we're performing an asynchronous operation by returning false
  return false;
}
We use the Artsy API to implement the actual skill. You can refer to the complete source code for implementation details. There’s not much more to it.
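As a sketch of what that implementation looks like (the field names such as `artist.bio` and `artist.nationality` are illustrative here, not the actual Artsy API schema), the asynchronous branch fetches the artist and turns the response into something speakable:

```javascript
// Illustrative only: turn an API response into the text Alexa speaks.
// The real skill calls the Artsy API; see the linked source code.
function describeArtist(artist) {
  if (artist.bio) {
    return artist.bio;
  }
  if (artist.nationality) {
    return artist.nationality + ' artist ' + artist.name + '.';
  }
  return "I couldn't find anything about " + artist.name + '.';
}

// Inside the intent, once the asynchronous lookup completes:
//   res.say(describeArtist(artist)).send();
```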


Organizing code


Although the production version of our skill runs on Lambda, the development version runs in Express, using a wrapper called alexa-app-server. It automatically loads skills from subdirectories with the following directory structure:

+--- server.js                  // the alexa-app-server host for development
+--- package.json               // dependencies of the host
+--- project.json               // lambda settings for deployment with apex
+--- functions                  // all skills
     +--artsy                   // the artsy skill
        +--function.json        // describes the skill lambda function
        +--package.json         // dependencies of the skill
        +--index.js             // skill intent implementation
        +--schema.json          // exported skill intent schema
        +--utterances.txt       // exported skill utterances
        +--node_modules         // modules from npm install
+--- node_modules               // modules from npm install

The server also neatly exports the express.js server for automated testing:

var AlexaAppServer = require('alexa-app-server');

AlexaAppServer.start({
    port: 8080,
    app_dir: "functions",
    post: function(server) {
        // export the underlying express.js server for automated tests
        module.exports = server.express;
    }
});


Skill modes


The skill is mounted standalone in AWS Lambda and runs under alexa-app-server in development. It decides what to do based on process.env['ENV'], which is natively supported by Lambda:

if (process.env['ENV'] == 'lambda') {
    exports.handle = app.lambda(); // AWS Lambda
} else {
    // development mode
    // http://localhost:8080/alexa/artsy
    module.exports = app;
}

Automated testing


A Mocha test can use the Alexa app server to make an HTTP request using intent data. It expects well-defined SSML output:

var chai = require('chai');
chai.use(require('chai-http'));   // provides chai.request
chai.use(require('chai-string')); // provides .startWith
var expect = chai.expect;

var server = require('../server');

describe('artsy alexa', function() {
    it('tells me about Norman Rockwell', function(done) {
        var aboutIntentRequest = require('./AboutIntentRequest.json');
        chai.request(server)
            .post('/alexa/artsy')
            .send(aboutIntentRequest)
            .end(function(err, res) {
                var data = JSON.parse(res.text);
                var ssml = data.response.outputSpeech.ssml;
                expect(ssml).to.startWith('<speak>American artist Norman Rockwell ');
                done();
            });
    });
});


Lambda deployment


The production version of the Alexa skill is a Lambda function without the development server parts.

We created an “alexa-artsy” function in AWS Lambda with a new AWS IAM role, “alexa-artsy”, and copied the role ARN into “project.json”. This file is used by Apex, a Lambda deployment tool (curl | sh), along with awscli (brew install awscli). We also had to configure access to AWS (aws configure) the first time.


To connect the Lambda function with an Alexa skill, we added an Alexa Skills Kit trigger.


We also configured the Service Endpoint in the Alexa Skills Kit configuration to point to our Lambda function.


To deploy the Lambda function, we chose Apex. You can deploy with apex deploy and test the skill with apex invoke. This workflow creates a new function version every time, including a copy of any configured environment variables. A certified production version of the Alexa skill is tied to a specific version of the Lambda function, which is quite different from a typical server infrastructure: all versions of a given function remain available at all times, accessible by the same ARN.

Logs didn’t appear in Amazon CloudWatch with the execution policy created by default. I had to give the IAM “alexa-artsy” role access to the standard CloudWatch Logs actions on “arn:aws:logs:*:*:*” via an additional inline policy:

  "Version": "2012-10-17",
  "Statement": [
      "Effect": "Allow",
      "Action": [
      "Resource": "arn:aws:logs:*:*:*"

The logs contain console.log output. The logs are versioned, timestamped, and searchable in CloudWatch.





You can test your skill with a simulator, or you can use an actual Echo device, such as an Echo Dot. Test skills appear automatically in the Alexa configuration attached to your account and are automatically installed on all devices configured with it.


You can also enable the production version of the Artsy skill on your own device. Ask Alexa to “enable Artsy”.




In this post, we showed how to combine familiar Node.js developer tools and libraries, such as express.js and Mocha, to build a development version of a complete system, and then pushed the functional parts of it to AWS Lambda. This model works very well and removes the busy work and headaches associated with typical server infrastructure. At Artsy, we now plan to look at larger systems that expose clearly defined APIs or perform tasks on demand, and attempt to decompose them into simpler moving parts that can be deployed and developed in a similar manner.


Find out more on the Artsy Engineering blog or follow me on Twitter.

Optimizing your costs for AWS services: Part 1

by Roshan Kothari | on | in Guides & Best Practices |

Since its inception in 2006, AWS has been committed to providing small businesses and developers the same world-class infrastructure, tools, and services that Amazon uses to handle its massive-scale retail operations. In part one of this series on costs, we highlight how you can optimize your budget by using AWS in highly cost-effective ways.

Over the years, we’ve seen plenty of new businesses start and succeed on AWS. For example, Airbnb, Pinterest, and Lyft started as small, innovative businesses, and scaled over time by using AWS resources on demand. AWS strives to provide as many services and tools as possible to help our customers succeed. Customer feedback drives 90-95% of our roadmap.

The stakes for startups are high; to extend their runway, they must develop their products with minimal spending. At AWS, our economies of scale allow us to pass savings on to our customers. As a result, we’ve had 59 price reductions since 2006. We now offer more than 90 services, and we work continually with our customers to better understand their workloads so we can recommend best practices and services.

Best practices in the selection and usage of computing services

Consider the following recommendations when assessing the computing services best suited to your needs.

AWS Lambda

AWS Lambda has proven to be a highly useful service for startups because it provides continuous scalability and requires zero administration. There is no requirement to provision or manage servers. Customers are charged based only on the number of requests, the amount of time the code runs, and the memory allocated. With AWS Lambda, customers can run virtually any type of application or backend service.

For some early-stage startups, the AWS Free Tier is often sufficient to launch their initial minimum viable product (MVP).

We recommend AWS Lambda for the following:

Workload example
The free tier includes 1M free requests and 400,000 GB-seconds of compute time per month. Thereafter, AWS Lambda is billed at $0.20 per 1 million requests and $0.00001667 for every GB-second used.

For example, if you allocate 128 MB of memory to a Lambda function, execute it 20 million times in one month, and run it for 200ms each time, it would cost you $5.46 per month. For more information, see the detailed calculations on the AWS Lambda Pricing page.
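That arithmetic can be sketched in a few lines. The free-tier limits and per-unit rates below are the January 2017 figures quoted in this post; current pricing may differ.

```javascript
// Estimate a monthly Lambda bill from request count, per-invocation
// duration (seconds), and allocated memory (MB), using the rates above.
function lambdaMonthlyCost(requests, durationSec, memoryMB) {
  var freeRequests = 1000000;          // 1M free requests per month
  var freeGbSeconds = 400000;          // 400,000 GB-seconds free per month
  var pricePerRequest = 0.20 / 1000000;
  var pricePerGbSecond = 0.00001667;

  var gbSeconds = requests * durationSec * (memoryMB / 1024);
  var requestCost = Math.max(requests - freeRequests, 0) * pricePerRequest;
  var computeCost = Math.max(gbSeconds - freeGbSeconds, 0) * pricePerGbSecond;
  return requestCost + computeCost;
}

// The example above: 20M invocations at 200ms each with 128 MB allocated.
// lambdaMonthlyCost(20000000, 0.2, 128) gives roughly $5.47 before rounding.
```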

Startups in early stages will not incur any charges if their requests and compute times don’t exceed free-tier usage.

AWS Case Study: Bustle
Check out how Bustle – a news, entertainment, lifestyle, and fashion website catering to women – has achieved approximately 84% cost savings by using serverless services such as AWS Lambda.

Amazon Elastic Compute Cloud (Amazon EC2)

Amazon EC2 provides resizable computing capacity in the cloud. You have complete control over your computing resources and can spin up resources within minutes, allowing you to scale quickly. You can just as quickly scale down your resources when you don’t need them.

Because AWS has more than one million active customers, ranging from solo developers to the largest companies in the world, we offer several ways to acquire compute capacity that fit different use cases. The most common options include:

  • Reserved Instances – Where you reserve instances for a term of one or three years and get a significant discount.
  • Spot Instances – Where you bid for unused EC2 capacity and can run as long as it’s available and your bid is above the Spot market price.
  • On-Demand Instances – Where you pay by the hour for the instances you launch.

Reserved Instances – Standard Reserved Instances provide a discount of up to 75% compared to On-Demand Instances. Reserved Instances could be a good choice for you if you have stable and predictable traffic, or if you know you will need your instances for at least a year. As another benefit, you can apply Reserved Instances to a specific Availability Zone, enabling you to launch the instances when you need them.

You can choose from the following payment options depending on the term and instance type:

  • All Upfront
  • Partial Upfront
  • No Upfront – popular among startups because it doesn’t require upfront capital

You also can list Reserved Instances that you don’t need any more for sale at the Reserved Instance Marketplace if more than a month’s term usage is left. Buying Reserved Instances from the Marketplace often results in shorter terms and lower prices.

You also can get Convertible Reserved Instances, which are available for a three-year term. They provide a discount of approximately 45% compared to On-Demand Instances and offer flexibility in changing instance attributes. The conversion requires new instances to be of the same or greater value.

We recommend Reserved Instances for the following:

  • Instances that must be online all the time and have steady or predictable traffic
  • Any baseline usage, while using On-Demand or Spot Instances for bursts
  • Applications that might require reserved capacity
  • Customers who can commit to using EC2 over a one-year or three-year term

As an example, the following shows approximate pricing for an m4.large Reserved Instance in the US East (N. Virginia) Region as of January 9, 2017. The pricing example assumes that the instance is online for one year.

Standard One-Year Term (On-Demand baseline: $0.108 per hour):

  • On-Demand – $946/yr.
  • No Upfront Standard one-year term – $648/yr.
  • Partial Upfront Standard one-year term – $551/yr.
  • All Upfront Standard one-year term – $541/yr.
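From yearly figures like these, it is easy to derive the effective hourly rate and the percentage saved versus On-Demand. A quick sketch using this post's January 2017 numbers:

```javascript
// Effective hourly rate and savings versus On-Demand, from yearly totals.
var HOURS_PER_YEAR = 8760;

function effectiveHourly(yearlyCost) {
  return yearlyCost / HOURS_PER_YEAR;
}

function savingsVsOnDemand(yearlyCost, onDemandYearly) {
  return 1 - yearlyCost / onDemandYearly;
}

// e.g. the No Upfront one-year term above:
//   effectiveHourly(648)        → about $0.074 per hour
//   savingsVsOnDemand(648, 946) → roughly 31.5% versus On-Demand
```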

The following shows approximate costs if the same instance is online continuously for three years.

Standard Three-Year Term (On-Demand baseline: $0.108 per hour):

  • On-Demand – $946/yr.
  • Partial Upfront Standard three-year term – $376/yr.
  • All Upfront Standard three-year term – $354/yr.

The following shows approximate costs if the instance is online continuously for three years and you want the flexibility to change instance attributes.

Convertible Three-Year Term (On-Demand baseline: $0.108 per hour):

  • On-Demand – $946/yr.
  • No Upfront Convertible three-year term – $586/yr.
  • Partial Upfront Convertible three-year term – $502/yr.
  • All Upfront Convertible three-year term – $487/yr.

AWS Case Study: Dropcam
Check out how Dropcam is saving around 67% on costs by using Reserved Instances.

Spot Instances – If you run a workload that isn’t time sensitive, Spot Instances could be beneficial for you. These instances allow customers to bid on unused EC2 computing capacity at up to a 90% discount compared to On-Demand Instance pricing. Spot Instances can be used until the bid expires or the Spot Market Price rises above the bid. The Spot Market Price is set by Amazon EC2 and fluctuates periodically depending on the supply of and demand for Spot Instance capacity.

We recommend Spot Instances for the following:

  • Workloads that aren’t time sensitive and could be interrupted
  • Hadoop/MapReduce type jobs with flexible start and end times
  • Testing software or websites: load, integration, canary, and security testing
  • Transforming videos in different formats
  • Log scanning or simulations, typically performed as batch jobs
  • Simulations ranging from drug discovery to genomics research

AWS Case Study: Gett
Check out how Gett, an Israel-based startup that connects people with taxi drivers, is saving $800,000 annually by taking advantage of Amazon EC2 Spot Instances.

AWS Case Study: Fugro Roames
Check out how Fugro Roames’ use of AWS and Spot Instances has enabled Ergon Energy to reduce the annual cost of vegetation management by A$40 million.

Bidding strategies

  • Refer to historical bidding prices on Spot Bid Advisor while bidding. This could reduce the chances of interruption in case the demand for Spot Instances increases.
  • Use older generation Instances for bidding because their prices are more stable, reducing the chances of interruption, whereas popular Spot Instances tend to have volatile Spot pricing.
  • Use On-Demand pricing as a baseline price, where bidding around that could reduce the chances of interruption. Customers are charged the current Spot price and not the bid price that they set.
  • Bid using Spot Blocks which enable Spot Instances to run continuously for up to six hours at a flat rate, saving up to 50% compared to On-Demand prices.
  • Test applications on different instance types when possible. Because prices fluctuate independently for each instance type in an Availability Zone, it’s possible to get more compute capacity for the same price when you have instance type flexibility.
  • Bid on a variety of instance types to further reduce costs and improve application performance.
  • Use Spot Fleets to bid on multiple instance types simultaneously.

On-Demand Instances – These instances are billed hourly with no long-term commitment or upfront payment. You can add or remove instances from the fleet, so you pay only the specified hourly rate for the instance type that you choose.

We recommend On-Demand Instances for the following:

  • Testing applications on EC2 instances and terminating them when not in use
  • Any baseline usage, while using additional On-Demand or Spot Instances for bursts
  • Applications with short-term, unpredictable, or spiky traffic that cannot be interrupted
  • Customers who want flexibility to switch between instance types, without upfront payment or commitment

AWS Case Study: AdRoll
Check out how AdRoll is using AWS global infrastructure and services and a combination of On-Demand, Reserved, and Spot Instances to reduce fixed costs by 75% and annual operational costs by 83%.

Overall recommendations:

  • Start with On-Demand Instances and assess utilization before committing to Reserved Instances.
  • For a steady-state usage pattern, pay All Upfront for 100% utilization.
  • For a predictable usage pattern, pay All Upfront or Partial Upfront with a low hourly rate for baseline traffic.
  • For peak times, supplement your capacity with On-Demand Instances.
  • For an uncertain, unpredictable usage pattern, start small with On-Demand Instances. If your usage grows and becomes more consistent, switch partially to Reserved Instances; if not, you can walk away having spent very little on On-Demand Instances for a short period.
  • Use a mix of Reserved Instances, On-Demand Instances, and Spot Instances. This allows you to find the right balance of saving money while keeping your resources online all the time.
  • Use Reserved Instances for baseline usage, and beyond the baseline usage scale using Auto Scaling by launching Spot Instances or On-Demand Instances.


Auto Scaling

Auto Scaling lets you scale EC2 capacity up or down, whether your traffic is spiky and unpredictable or predictable and scheduled, ensuring that you add only the resources you require, according to the conditions or policies you define. This is essential for new businesses that are unsure of the traffic their application might experience.

We recommend that you start small and let an Auto Scaling Group add more instances automatically if required. Auto Scaling is well suited to applications that have stable demand patterns, and to applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch, a resource monitoring tool, and carries no additional fees beyond the resource costs incurred by services provisioned by Auto Scaling.

You can use Auto Scaling in conjunction with CloudWatch, choosing from many metrics, including average CPU utilization, network traffic, and disk reads and writes. Example policies include the following:

  • If CPU utilization is greater than 60% for 300 seconds, add 30% of the group.
  • If CPU utilization is 80-100% for 300 seconds, add two instances.
  • If CPU utilization is 60-80%, add one instance.
  • If CPU utilization is 0-20%, remove two instances. The minimum number of instances is five.
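As an illustrative sketch (not the Auto Scaling API — in practice these rules are expressed as CloudWatch alarms attached to an Auto Scaling group), the step policies above boil down to a decision like this:

```javascript
// Illustrative only: the decision a set of step-scaling policies like the
// ones above computes from average CPU utilization and current group size.
function desiredCapacity(avgCpu, currentSize, minSize) {
  minSize = minSize || 5;                      // minimum of five instances
  if (avgCpu >= 80) return currentSize + 2;    // 80-100%: add two instances
  if (avgCpu > 60) return currentSize + 1;     // 60-80%: add one instance
  if (avgCpu <= 20) {                          // 0-20%: remove two instances,
    return Math.max(currentSize - 2, minSize); // but never go below the minimum
  }
  return currentSize;                          // otherwise hold steady
}
```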

You can use a combination of Reserved, On-Demand, and Spot Instances in an Auto Scaling group to expand or shrink. You can use Reserved Instances for baseline utilization and configure an Auto Scaling Group to add capacity by launching Spot Instances or On-Demand Instances.

We recommend Auto Scaling for the following:

  • On and off traffic patterns, such as testing or periodic analysis
  • Unexpected growth such as events or applications getting heavy traffic
  • Variable traffic such as seasonal sales or news and media
  • Consistent traffic such as high use of resources at a particular time range each day (for example, during business hours)

AWS Case Study: Lyft
Check out how Lyft is using Auto Scaling to manage up to eight times more passengers during peak times and scale down at other times to stop paying for those resources.

AWS Case Study: Gruppo Editoriale
Check out how Gruppo Editoriale L’Espresso, an Italian multimedia firm, is using Auto Scaling for unpredictable traffic that sometimes grows as much as 300% if a story breaks.


AWS constantly works to reduce prices and, through economies of scale, to pass the advantages on to our customers. With the variety of purchasing options available through AWS, customers can design pricing plans that fit their own requirements. At AWS, we are committed to listening and responding to customers. Our commitment is to provide an infrastructure that is highly secure, reliable, and scalable, allowing customers to focus on the products and features that differentiate them.

What’s Next?

Part 2 of this series will discuss ways to optimize costs for storage and other AWS services. If you’re in NYC and want to learn more about cost optimization on AWS, join us at the AWS Loft for Running Lean Architectures on AWS + Office Hours. Register now!