AWS Startups Blog

We interviewed over 20 Startups in Tel Aviv and here’s what we found

by Rei Biermann | in Events, Startups On Air

Last week we had the opportunity to meet with over 20 startups in Tel Aviv. You may have caught our recent blog post "The Israeli Recipe," where we shared a look into this thriving startup ecosystem. With 5,720 technology companies responsible for 45% of the country's GDP, it's no surprise that these companies are building in every imaginable industry. We had the great fortune to sit down with some of these founders, hear about what they are building, and find out how AWS is helping to power their innovation.
Here is an aggregate list of the startups we visited. Stay tuned to @awsstartups for upcoming Startups On Air videos, in which our Global Startup Evangelist Mackenzie Kosut will visit London, Singapore, China, Spain, Portugal, and more to meet and talk with some of the most exciting startups in the world.


Watch and Learn


  • Live with Moovit, discussing Amazon Redshift, Amazon Kinesis, Amazon Simple Queue Service, Amazon Elastic File System, and more!
  • Catching up with Spotinst about intelligent workload management that can help reduce your Amazon Elastic Compute Cloud costs by 80%.
  • Catching up with Cloudinary, which provides cloud-based image and video management leveraging AWS Lambda, Amazon Athena, Amazon Aurora, and more!
  • Watch as Gong demos its ability to automatically record, transcribe, and analyze your sales calls.
  • Yotpo showcases its user- and customer-generated content platform powered by Amazon EC2 Container Service, Amazon Elastic MapReduce, AWS Lambda, and more!
  • LePROMO demos instant promo video generation powered by Amazon SWF, Amazon EC2 Spot Instances, Amazon EC2 Elastic GPUs, Amazon Elastic Transcoder, and more!
  • Talking with JENNY, part of Techstars TLV, about building an open source conversational platform and more!
  • Talking with KARD, who's maximizing credit rewards programs built with PCI compliance on AWS!
  • Visiting Funnster, who's making fun stuff happen on AWS Lambda, Amazon SNS, Amazon EC2, and more!

Building a VPC with the AWS Startup Kit

by Brent Rabowsky | in Guides & Best Practices

The AWS Startup Kit provides resources to help startups begin building applications on AWS. Included with the Startup Kit is a set of AWS CloudFormation templates. These templates create fundamental cloud infrastructure building blocks, including an Amazon Virtual Private Cloud (Amazon VPC), a bastion host, and an optional relational database. The templates are available on GitHub. They create the following architecture:

Before you work with the templates, you should be familiar with basic VPC concepts. If not, see the VPC documentation. The VPC template is the foundation for everything you build on AWS with the Startup Kit. It creates a VPC with the following network resources:

  • Two public subnets, which have routes to a public Internet gateway.
  • Two private subnets, which do NOT have routes to the public Internet gateway.
  • A NAT Gateway to allow instances in private subnets to communicate with the public Internet, for example, to pull down patches and upgrades, and access AWS services with public endpoints such as Amazon DynamoDB.
  • Two route tables, one for public subnets and the other for private subnets.
  • Security groups for an app, load balancer, database, and bastion host.

The bastion host template creates a bastion host that provides SSH access to resources you place in private subnets for greater security. Resources placed in private subnets could include application instances, database instances, analytics clusters, and other resources you do not want to be discoverable via the public Internet. For example, along with proper authentication and authorization controls, placing database instances in private subnets helps you avoid the security risks of exposing databases to the public Internet.

After you’ve created your VPC and bastion host, you can optionally create a relational database using the database template. Either a MySQL or PostgreSQL database is created in the Amazon Relational Database Service (Amazon RDS), which automates much of the heavy lifting of database setup and maintenance. Following best practices, the database is created in your VPC’s private subnets and is concealed from the public Internet.

A forthcoming YouTube “how to” video will present a walkthrough of the process of using the templates. In the meantime, the README of the GitHub repository has detailed template usage instructions.


Managing Your Infrastructure with CloudFormation

You can manage your infrastructure on AWS entirely via the AWS console.  However, there are many advantages to following an “infrastructure as code” approach using CloudFormation or similar tools.

CloudFormation provides an easy way to create and manage a collection of related AWS resources, allowing you to provision and update them in an orderly and predictable fashion. Here are some of the benefits of using CloudFormation to manage your infrastructure:

  • Dependencies between resources are managed for you by CloudFormation, so you don’t need to figure out the order of provisioning.
  • You can version control your infrastructure like your application code by keeping your CloudFormation templates in Git or another source control solution.
  • You can parameterize your templates so you can deploy the same stack with variations for different environments (test or prod) or different regions.
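The parameterization benefit above can be sketched with boto3. Note that the stack name and parameter keys here are hypothetical illustrations, not names taken from the actual Startup Kit templates:

```python
# Sketch only: "EnvironmentName" / "InstanceType" are hypothetical parameter
# keys, not from the real Startup Kit templates.

def stack_parameters(environment):
    """Build a CloudFormation parameter list that varies by environment."""
    return [
        {"ParameterKey": "EnvironmentName", "ParameterValue": environment},
        {"ParameterKey": "InstanceType",
         "ParameterValue": "t2.micro" if environment == "test" else "m4.large"},
    ]

def deploy(environment, template_body):
    """Create one stack per environment from the same template body."""
    import boto3  # imported lazily so stack_parameters works without AWS access
    cfn = boto3.client("cloudformation")
    return cfn.create_stack(
        StackName="startup-kit-vpc-" + environment,
        TemplateBody=template_body,
        Parameters=stack_parameters(environment),
    )
```

Calling `deploy("test", body)` and `deploy("prod", body)` would then stand up two stacks from one version-controlled template.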

Over time, you might find you need to add new resources to the existing resources provided by the Startup Kit templates. For example, if you need to run 1,000 or more instances, you will exhaust the IP addresses available in the existing subnets and will need to add more subnets. Add new resources by modifying the templates and committing the changes in your source control repository, rather than making changes through the AWS console. This makes it easier to track the changes and roll them back if necessary.
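As a back-of-the-envelope check on when subnets run out of addresses (AWS reserves five addresses in every subnet; the /24 size below is an assumption for illustration, not necessarily what the templates use):

```python
def usable_ips(prefix_length):
    """Usable addresses in a subnet of the given prefix length.

    AWS reserves 5 addresses per subnet: the network address, the VPC
    router, DNS, one reserved for future use, and the broadcast address.
    """
    return 2 ** (32 - prefix_length) - 5

# If the two private subnets were /24s (an assumption), they would hold
# about 2 * 251 = 502 instances, so a fleet of 1,000+ instances would
# indeed require adding subnets or using larger ones.
```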

For details about the capabilities of CloudFormation and how to write templates, see the CloudFormation documentation. You can declare resources using a straightforward YAML (or JSON) syntax. For example, the following snippet from the VPC template shows the simple syntax for creating the top-level VPC resource. As used in the snippet, CloudFormation’s FindInMap and Ref functions enable dynamic lookup of the CIDR block for the VPC and the name of the VPC stack, respectively:

    VPC:
      Type: AWS::EC2::VPC
      Properties:
        CidrBlock: !FindInMap [CIDRMap, VPC, CIDR]
        EnableDnsSupport: true
        EnableDnsHostnames: true
        Tags:
          - Key: Name
            Value: !Ref "AWS::StackName"


Connecting to Your Instances and Database

In general, it is best to avoid connecting to your instances over SSH to manage them individually. Instead, manage your instances with a higher-level management service such as AWS Elastic Beanstalk or AWS OpsWorks. When you do need to connect to instances, for example for debugging purposes, connect via the bastion host created by the bastion template. One way to do this is to use SSH agent forwarding. For details about how to set this up on your local computer, consult the relevant AWS blog post.
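The agent-forwarding pattern can be sketched as a small helper that composes the two-hop command. The hostnames are hypothetical and the helper itself is just an illustration, not part of the Startup Kit:

```python
def bastion_ssh_command(bastion_dns, private_ip, user="ec2-user"):
    """Compose a two-hop SSH command: agent forwarding (-A) lets the second
    hop authenticate with your local key without copying it to the bastion."""
    return "ssh -A {u}@{b} ssh {u}@{p}".format(u=user, b=bastion_dns, p=private_ip)

# e.g. bastion_ssh_command("bastion.example.com", "10.0.2.15")
```

Run `ssh-add` for your key first so the agent can forward it.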

Because the database is in a private subnet, it is also necessary to connect to it through the bastion host, using a method such as TCP/IP over SSH. For an example of how to do this with MySQL Workbench, see the relevant documentation and the following screenshot.


In the Manage Server Connections dialog box for your database connection, fill in the following values:

  1. For SSH Hostname, type the public DNS name of your bastion host.
  2. For SSH Username, type ec2-user.
  3. Leave SSH Password blank.
  4. For SSH Key File, type the path to the EC2 key pair you created.
  5. For MySQL Hostname, type the RdsDbURL value from the Outputs tab of the database stack in the CloudFormation console.
  6. For MySQL Server Port, type 3306.
  7. For the Username and Password fields, enter the values you chose when you created the database.
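The same settings map onto a plain SSH port-forwarding command. This sketch (with hypothetical hostnames) shows the shape of the tunnel that MySQL Workbench builds for you:

```python
def mysql_tunnel_command(bastion_dns, rds_url, local_port=3306, user="ec2-user"):
    """Forward a local port through the bastion to the RDS endpoint's MySQL
    port (3306); point your MySQL client at 127.0.0.1:local_port."""
    return "ssh -N -L {lp}:{rds}:3306 {u}@{b}".format(
        lp=local_port, rds=rds_url, u=user, b=bastion_dns)
```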


Next Steps

After you’ve created your VPC-related infrastructure with the Startup Kit templates, you can add on top of it applications, analytics clusters, and other components using any technologies of your choice. If you’re building an application such as a web app or RESTful API, Elastic Beanstalk can help automate the process of setting up, managing, and scaling your application.

Whichever technologies you use, be sure to place load balancer resources in the public subnets of your VPC, and spin up application instances in your private subnets. Also, make sure to assign the relevant security group created by the VPC template to each of your components. Check the Outputs tab of the VPC stack in the CloudFormation console for the IDs of the security groups, which are prefixed with sg-. Here’s how the security groups should be assigned:

  • The ELBSecurityGroup should be assigned to load balancers, such as Application Load Balancers or Classic Load Balancers. This security group allows load balancers to talk to application instances.
  • The AppSecurityGroup should be assigned to application instances, such as RESTful API servers or web servers. This security group allows those instances to talk to the database as well as the load balancer, and receive SSH traffic from the bastion host.

Besides the fundamental infrastructure templates discussed in this blog post, the Startup Kit also includes the Startup Kit Serverless Workload. Watch out for more Startup Kit information in the near future!


Special thanks to Itzik Paz for providing the architecture diagram at the top of this post.

Bringing art to Amazon Alexa on AWS Lambda

by admin | in Featured Guests

Guest post by Daniel Doubrovkine, CTO, Artsy

At a recent Artsy board meeting an investor asked, “You’ve shipped a tremendous amount of software in 2016 with a very small team. How did you do that?”

Indeed, in 2016 we built a new live auctions service and executed over 40 auctions for major auction houses, including Phillips and Heritage. We simultaneously grew our online publishing business into the most-read art publication in the world. And we more than doubled the number of gallery partners on the platform, all while seeing fairly moderate growth in operational costs.

This progress is the result of combining great people, an exceptionally efficient organization, and a systematic approach to creating experiments with the breadth, depth, and virtually unlimited power of AWS. Practically, this means that we evaluate and adopt at least one major new framework with each non-critical project. We develop small ideas as opportunities to learn, and often graduate them into production workloads. For example, last year we tried and have now fully adopted Kubernetes with Amazon ECR. And today we're exploring AWS Lambda for mission-critical applications, after first using it to load data from Amazon S3 into Amazon Redshift and then shipping an Alexa skill that turns the little voice-activated device into an art historian.

In this post, I walk you through developing, deploying, and monitoring a Node.js Lambda function that powers Artsy on Alexa. We implement an Alexa skill that runs on a development machine, deploy the code to AWS Lambda, and enable it on an Alexa device, the Amazon Echo. You can find the complete source code for this process on GitHub.

First, a bit of context about the Amazon Echo. The device contains a built-in speech recognizer for the wake word, so it’s always listening. After it hears the wake word, a blue light ring turns on, and it begins transmitting the user’s voice (called an utterance) to the Alexa platform that runs on AWS. The light ring indicates that Alexa is “listening.” The Alexa cloud service translates speech to text and runs it through a natural language system to identify an intent, such as “ask Artsy about”. The intent is sent to a skill (a Lambda function) that generates a directive to “speak” along with markup in SSML format, which is transformed into voice and sent in WAV format back to the device, to be played back to the user.

To get started, you need an Amazon Apps and Services Developer account and access to AWS Lambda.


Designing an intent


To get Alexa to listen, you first design an intent. Each intent is identified by a name and a set of slots. The intents have to be simple and clear and use English language words or predefined vocabularies. I started with a simple “ask Artsy about an artist” intent, which takes an artist’s name as input:

   "intents": [
     {
       "intent": "AboutIntent",
       "slots": [
         {
           "name": "VALUE",
           "type": "NAME"
         }
       ]
     }
   ]
The only possible sample utterance of this intent is “about {VALUE}”. The “ask Artsy” portion is implied.

Alexa supports several built-in slot types, such as “AMAZON.DATE” or “AMAZON.NUMBER”. Because Alexa cannot understand artists’ names out-of-the-box, we had to teach it with a custom, user-defined slot type added to the Skill Interaction Model with about a thousand of the most popular artists’ names on Artsy.



Implementing a skill


Intents are the API of a skill, otherwise known as an Alexa app. Using the open-source alexa-app library from the alexa-js community makes implementing intents easy.

In the following code, we define a “launch” event that is invoked when the skill is launched by the user (for example, “Alexa, open Artsy”). The launch event is followed by the “about” intent that we described earlier:

var alexa = require('alexa-app');
var app = new alexa.app('artsy');

app.launch(function(req, res) {
    res
        // welcome message
        .say("Welcome to Artsy! Ask me about an artist.")
        // don't close the session, wait for user input (an artist name)
        // and provide a re-prompt in case the user says something meaningless
        .shouldEndSession(false, "Ask me about an artist. Say help if you need help or exit any time to exit.")
        // speak the response
        .send();
});

app.intent('AboutIntent', {
        "slots": {
            "VALUE": "NAME"
        },
        "utterances": [
            "about {-|VALUE}"
        ]
    },
    function(req, res) {
        // intent implementation goes here
    }
);
The skill expects a slot value, which is the artist’s name.

var value = req.slot('VALUE');

if (!value) {
  return res
    .say("Sorry, I didn't get that artist name.")
    // don't close the session, wait for user input again (an artist name)
    .shouldEndSession(false, "Ask me about an artist. Say help if you need help or exit any time to exit.");
} else {
  // asynchronously look up the artist in the Artsy API, read their bio
  // tell alexa-app that we're performing an asynchronous operation by returning false
  return false;
}
We use the Artsy API to implement the actual skill. You can refer to the complete source code for implementation details. There’s not much more to it.


Organizing code


Although the production version of our skill runs on Lambda, the development version runs in Express, using a wrapper called alexa-app-server. It automatically loads skills from subdirectories with the following directory structure:

+--- server.js                  // the alexa-app-server host for development
+--- package.json               // dependencies of the host
+--- project.json               // lambda settings for deployment with apex
+----functions                  // all skills
     +--artsy                   // the artsy skill
        +--function.json        // describes the skill lambda function
        +--package.json         // dependencies of the skill
        +--index.js             // skill intent implementation
        +--schema.json          // exported skill intent schema
        +--utterances.txt       // exported skill utterances
        +--node_modules         // modules from npm install
+--- node_modules               // modules from npm install

The server also neatly exports the express.js server for automated testing:

var AlexaAppServer = require('alexa-app-server');

AlexaAppServer.start({
    port: 8080,
    app_dir: "functions",
    // export the underlying express.js server for automated testing
    post: function(server) {
        module.exports = server.express;
    }
});

Skill modes


The skill is mounted standalone in AWS Lambda and runs under alexa-app-server in development. It decides what to do based on process.env['ENV'], which is natively supported by Lambda:

if (process.env['ENV'] == 'lambda') {
    exports.handle = app.lambda(); // AWS Lambda
} else {
    // development mode
    // http://localhost:8080/alexa/artsy
    module.exports = app;
}

Automated testing


A Mocha test can use the Alexa app server to make an HTTP request using intent data. It expects well-defined SSML output:

chai = require('chai');
expect = chai.expect;
chai.use(require('chai-http'));   // for chai.request()
chai.use(require('chai-string')); // for expect(...).to.startWith()

var server = require('../server');

describe('artsy alexa', function() {
    it('tells me about Norman Rockwell', function(done) {
        var aboutIntentRequest = require('./AboutIntentRequest.json');
        chai.request(server)
            .post('/alexa/artsy')
            .send(aboutIntentRequest)
            .end(function(err, res) {
                var data = JSON.parse(res.text);
                var ssml = data.response.outputSpeech.ssml;
                expect(ssml).to.startWith('<speak>American artist Norman Rockwell ');
                done();
            });
    });
});


Lambda deployment


The production version of the Alexa skill is a Lambda function without the development server parts.

We created an "alexa-artsy" function with a new AWS IAM role, "alexa-artsy", in AWS Lambda. We copied the role ARN into "project.json", a file used by Apex, a Lambda deployment tool installed via a curl | sh script, alongside awscli (brew install awscli). The first time, we also had to configure access to AWS (aws configure).


To connect the Lambda function with an Alexa skill, we added an Alexa Skills Kit trigger.


We also configured the Service Endpoint in the Alexa Skills Kit configuration to point to our Lambda function.


To deploy the Lambda function, we chose Apex. You can deploy with apex deploy and test the skill with apex invoke. This workflow creates a new function version every time, including a copy of any configured environment variables. A certified production version of the Alexa skill is tied to a specific version of the Lambda function, which is quite different from a typical server infrastructure: all versions of a given function remain available at all times, accessible by the same ARN.

Logs didn’t appear in Amazon CloudWatch with the execution policy created by default. I had to give the IAM “alexa-artsy” role more access to “arn:aws:logs:::*” via an additional inline policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
The logs contain console.log output. The logs are versioned, timestamped, and searchable in CloudWatch.





You can use the skill simulator in the Alexa developer console to test your skill, or you can use an actual Echo device, such as an Echo Dot. Test skills appear automatically in the Alexa configuration attached to your account and are automatically installed on all devices configured with it.


You can also enable the production version of the Artsy skill on your own device. Ask Alexa to “enable Artsy”.




In this post, we showed how to combine familiar Node.js developer tools and libraries, such as express.js and Mocha, to build a development version of a complete system. We then pushed the functional parts of it to AWS Lambda. This model works very well and removes much of the busywork and many of the headaches associated with typical server infrastructure. At Artsy, we now plan to look at larger systems that expose clearly defined APIs or perform tasks on demand, and attempt to decompose them into simpler moving parts that can be deployed and developed in a similar manner.


Find out more on the Artsy Engineering blog or follow me on Twitter.

Optimizing your costs for AWS services: Part 1

by Roshan Kothari | in Guides & Best Practices

Since its inception in 2006, AWS has been committed to providing small businesses and developers the same world-class infrastructure, tools, and services that Amazon uses to run its own massive-scale retail operations. In part one of this series on costs, we highlight how you can optimize your budget by using AWS in highly cost-effective ways.

Over the years, we’ve seen plenty of new businesses start and succeed on AWS. For example, Airbnb, Pinterest, and Lyft started as small, innovative businesses, and scaled over time by using AWS resources on demand. AWS strives to provide as many services and tools as possible to help our customers succeed. Customer feedback drives 90-95% of our roadmap.

The stakes for startups are high, so to extend their runway, they must develop their products with minimal spending. At AWS, our economies of scale allow us to pass savings on to our customers. As a result, we've had 59 price reductions since 2006. We now offer more than 90 services, and we work continually with our customers to better understand their workloads so that we can recommend best practices and services.

Best practices in the selection and usage of computing services

Consider the following recommendations when assessing the computing services best suited to your needs.

AWS Lambda

AWS Lambda has proven to be a highly useful service for startups because it provides continuous scalability and requires zero administration. There is no requirement to provision or manage servers. Customers are charged based only on the number of requests, the amount of time the code runs, and the memory allocated. With AWS Lambda, customers can run virtually any type of application or backend service.

For some early-stage startups, the AWS Free Tier is often sufficient to launch their initial minimum viable product (MVP).

We recommend AWS Lambda for workloads like the following example.

Workload example
The free tier includes 1M free requests and 400,000 GB-seconds of compute time per month. Thereafter, AWS Lambda is billed at $0.20 per 1 million requests and $0.00001667 for every GB-second used.

For example, if you allocate 128 MB of memory to a Lambda function, execute it 20 million times in one month, and run it for 200ms each time, it would cost you $5.46 per month. For more information, see the detailed calculations on the AWS Lambda Pricing page.

Startups in early stages will not incur any charges if their requests and compute times don’t exceed free-tier usage.
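The arithmetic above can be cross-checked with a minimal calculator. The result may differ from the published figure by a cent or so depending on rounding:

```python
# Rough re-computation of the pricing example above, using the rates quoted
# in this post; AWS's published figure may differ slightly due to rounding.

FREE_REQUESTS = 1000000        # free requests per month
FREE_GB_SECONDS = 400000       # free GB-seconds per month
PRICE_PER_M_REQUESTS = 0.20    # dollars per 1M requests
PRICE_PER_GB_SECOND = 0.00001667

def lambda_monthly_cost(requests, duration_s, memory_gb):
    """Monthly Lambda bill after subtracting the free tier."""
    gb_seconds = requests * duration_s * memory_gb
    compute = max(gb_seconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_SECOND
    request_charge = max(requests - FREE_REQUESTS, 0) / 1000000 * PRICE_PER_M_REQUESTS
    return compute + request_charge

# 128 MB (0.125 GB), 20 million invocations, 200 ms each:
cost = lambda_monthly_cost(20000000, 0.2, 0.125)  # roughly $5.47
```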

AWS Case Study: Bustle
Check out how Bustle, a news, entertainment, lifestyle, and fashion website catering to women, has experienced approximately 84% cost savings by using serverless services such as AWS Lambda.

Amazon Elastic Compute Cloud (Amazon EC2)

Amazon EC2 provides resizable computing capacity in the cloud. You have complete control over your computing resources and can spin up resources within minutes, allowing you to scale quickly. You can just as quickly scale down your resources when you don’t need them.

Because AWS has over one million active customers, ranging from solo developers to the largest companies in the world, we offer several ways to acquire compute capacity that fit different use cases. Common ways our customers acquire compute capacity include:

  • Reserved Instances – You reserve instances for a one- or three-year term and receive a significant discount.
  • Spot Instances – You bid on unused EC2 capacity; your instances run as long as capacity is available and your bid is above the Spot market price.
  • On-Demand Instances – You pay by the hour for the instances you launch.

Reserved Instances – Standard Reserved Instances provide a discount of up to 75% compared to On-Demand Instances. Reserved Instances can be a good choice if you have stable, predictable traffic, or if you know you will need your instances for at least a year. As another benefit, you can apply Reserved Instances to a specific Availability Zone, which reserves capacity so that you can launch the instances when you need them.

You can choose from the following payment options depending on the term and instance type:

  • All Upfront
  • Partial Upfront
  • No Upfront – popular among startups because it doesn’t require upfront capital

You can also list Reserved Instances that you no longer need for sale on the Reserved Instance Marketplace, as long as more than one month of the term remains. Buying Reserved Instances from the Marketplace often results in shorter terms and lower prices.

You can also purchase Convertible Reserved Instances, which are available for a three-year term. They provide a discount of approximately 45% compared to On-Demand Instances and offer the flexibility to change instance attributes. A conversion requires the new instances to be of equal or greater value.

We recommend Reserved Instances for the following:

  • Instances that must be online all the time and have steady or predictable traffic
  • Any baseline usage, while using On-Demand or Spot Instances for bursts
  • Applications that might require reserved capacity
  • Customers who can commit to using EC2 over a one- or three-year term

As an example, the following table shows approximate pricing for an m4.large Reserved Instance in the US East (N. Virginia) Region as of January 9, 2017. The pricing example assumes that the instance is online for one year.

Standard One-Year Term (annualized costs; the On-Demand baseline is $0.108 per hour):
  • On-Demand – $946/yr.
  • No Upfront Standard one-year term – $648/yr.
  • Partial Upfront Standard one-year term – $551/yr.
  • All Upfront Standard one-year term – $541/yr.
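The effective hourly rates and savings implied by these annual figures can be derived directly. This is arithmetic on the numbers above, not official AWS pricing:

```python
HOURS_PER_YEAR = 8760
ON_DEMAND_ANNUAL = 946.0  # m4.large On-Demand annual cost, from the list above

def effective_hourly(annual_cost):
    """Effective hourly rate for an instance running all year."""
    return annual_cost / HOURS_PER_YEAR

def savings_vs_on_demand(annual_cost):
    """Percentage saved relative to the On-Demand annual cost."""
    return (ON_DEMAND_ANNUAL - annual_cost) / ON_DEMAND_ANNUAL * 100

# e.g. the No Upfront one-year term ($648/yr) works out to roughly
# $0.074 per hour, about a 31% saving over On-Demand.
```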

The following table shows approximate costs if the same instance is online continuously for three years.

Standard Three-Year Term (annualized costs; the On-Demand baseline is $0.108 per hour):
  • On Demand – $946/yr.
  • Partial Upfront Standard three-year term – $376/yr.
  • All Upfront Standard three-year term – $354/yr.

The following table shows approximate costs if the instance is online continuously for three years, and you want the flexibility to change instance attributes.

Convertible Three-Year Term (annualized costs; the On-Demand baseline is $0.108 per hour):
  • On-Demand – $946/yr.
  • No Upfront Convertible three-year term – $586/yr.
  • Partial Upfront Convertible three-year term – $502/yr.
  • All Upfront Convertible three-year term – $487/yr.

AWS Case Study: Dropcam
Check out how Dropcam is saving around 67% on costs by using Reserved Instances.

Spot Instances – If you run a workload that isn't time sensitive, Spot Instances could be beneficial for you. These instances allow customers to bid on unused EC2 computing capacity at up to a 90% discount compared to On-Demand Instance pricing. Spot Instances can be used until the bid expires or the Spot market price rises above the bid. The Spot market price is set by Amazon EC2 and fluctuates periodically depending on the supply of and demand for Spot Instance capacity.

We recommend Spot Instances for the following:

  • Workloads that aren’t time sensitive and could be interrupted
  • Hadoop/MapReduce type jobs with flexible start and end times
  • Testing software or websites: load, integration, canary, and security testing
  • Transforming videos in different formats
  • Log scanning or simulations, typically performed as batch jobs
  • Simulations ranging from drug discovery to genomics research

AWS Case Study: Gett
Check out how Gett, an Israel-based startup that connects people with taxi drivers, is saving $800,000 annually by taking advantage of Amazon EC2 Spot Instances.

AWS Case Study: Fugro Roames
Check out how Fugro Roames' use of AWS and Spot Instances has enabled Ergon Energy to reduce the annual cost of vegetation management by A$40 million.

Bidding strategies

  • Refer to historical Spot prices with the Spot Bid Advisor when bidding. This can reduce the chances of interruption if demand for Spot Instances increases.
  • Bid on older-generation instances, whose prices are more stable, reducing the chances of interruption; popular instance types tend to have more volatile Spot pricing.
  • Use the On-Demand price as a baseline; bidding near it reduces the chances of interruption. You are charged the current Spot price, not the bid price you set.
  • Use Spot Blocks, which enable Spot Instances to run continuously for up to six hours at a flat rate, saving up to 50% compared to On-Demand prices.
  • Test applications on different instance types when possible. Because prices fluctuate independently for each instance type in an Availability Zone, you can often get more compute capacity for the same price when you have instance type flexibility.
  • Bid on a variety of instance types to further reduce costs and improve application performance.
  • Use Spot Fleets to bid on multiple instance types simultaneously.
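The billing rule described above, where an instance runs while the Spot price stays at or below your bid and you pay the Spot price rather than the bid, can be sketched as a toy hour-by-hour simulation with made-up prices:

```python
def spot_hours_and_charge(bid, hourly_spot_prices):
    """Toy model of one Spot Instance: it runs each hour the Spot price is
    at or below the bid, is interrupted as soon as the price exceeds it,
    and is charged the current Spot price (not the bid) for hours it ran."""
    total = 0.0
    hours = 0
    for price in hourly_spot_prices:
        if price > bid:
            break  # interrupted: Spot price rose above the bid
        total += price
        hours += 1
    return hours, total

# With a $0.10 bid and prices [0.05, 0.07, 0.12, ...], the instance runs
# two hours and is charged $0.05 + $0.07, then is interrupted.
```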

On-Demand Instances – These instances are billed by the hour with no long-term commitment or upfront payment. You can add or remove instances at any time, and you pay only the hourly rate for the instance types you choose.

We recommend On-Demand Instances for the following:

  • Testing applications on EC2 instances and terminating them when not in use
  • Any baseline usage, while using additional On-Demand or Spot Instances for bursts
  • Applications with short-term, unpredictable, or spiky traffic that cannot be interrupted
  • Customers who want flexibility to switch between instance types, without upfront payment or commitment

AWS Case Study: AdRoll
Check out how AdRoll is using AWS global infrastructure and services and a combination of On-Demand, Reserved, and Spot Instances to reduce fixed costs by 75% and annual operational costs by 83%.

Overall recommendations:

  • Start with On-Demand Instances and assess utilization before purchasing Reserved Instances.
  • For a steady-state usage pattern, pay All Upfront for 100% utilization.
  • For a predictable usage pattern, pay All Upfront or Partial Upfront with a low hourly rate for baseline traffic.
  • For peak times, supplement your capacity with On-Demand Instances.
  • For an uncertain, unpredictable usage pattern, start small with On-Demand Instances. If usage grows and becomes more consistent, switch part of it to Reserved Instances; if not, you can walk away having spent very little on On-Demand Instances for a short period.
  • Use a mix of Reserved, On-Demand, and Spot Instances to find the right balance between saving money and keeping your resources online all the time.
  • Use Reserved Instances for baseline usage, and scale beyond the baseline with Auto Scaling launching Spot Instances or On-Demand Instances.


Auto Scaling

Auto Scaling lets you scale EC2 capacity up or down, whether you experience spiky, unpredictable traffic or predictable, scheduled traffic, ensuring you add only the resources you require according to the conditions or policies you define. This is essential for new businesses that are unsure of the traffic their application might experience.

We recommend that you start small and let an Auto Scaling group add more instances automatically if required. Auto Scaling is well suited both to applications with stable demand patterns and to applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch, a resource monitoring tool, and carries no fees beyond the costs of the resources it provisions.

You can use Auto Scaling in conjunction with CloudWatch to scale on many metrics, including average CPU utilization, network traffic, and disk reads and writes. Example policies include the following:

  • If CPU utilization is greater than 60% for 300 seconds, add 30% of the group.
  • If CPU utilization is 80-100% for 300 seconds, add two instances.
  • If CPU utilization is 60-80%, add one instance.
  • If CPU utilization is 0-20%, remove two instances, down to a minimum group size of five instances.
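
As a sketch of how rules like the last three evaluate, the following maps a CPU utilization reading to a capacity change. This is illustrative only; in a real deployment these rules live in CloudWatch alarms attached to Auto Scaling policies, not in application code.

```python
# Illustrative evaluation of the example scaling policies above.
# In practice these rules are CloudWatch alarms driving Auto Scaling
# policies; this just shows the decision logic.

MIN_INSTANCES = 5

def desired_change(cpu_percent, current_size):
    """Return the instance delta implied by the example policies."""
    if cpu_percent >= 80:
        return 2                   # 80-100%: add two instances
    if cpu_percent >= 60:
        return 1                   # 60-80%: add one instance
    if cpu_percent <= 20:
        # 0-20%: remove two, but never drop below the minimum group size
        return max(current_size - 2, MIN_INSTANCES) - current_size
    return 0                       # 20-60%: no change

print(desired_change(90, 10))  # 2
print(desired_change(10, 6))   # -1 (floor of 5 instances)
```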

You can use a combination of Reserved, On-Demand, and Spot Instances in an Auto Scaling group to expand or shrink. You can use Reserved Instances for baseline utilization and configure an Auto Scaling Group to add capacity by launching Spot Instances or On-Demand Instances.

We recommend Auto Scaling for the following:

  • On and off traffic patterns, such as testing or periodic analysis
  • Unexpected growth such as events or applications getting heavy traffic
  • Variable traffic such as seasonal sales or news and media
  • Consistent traffic such as high use of resources at a particular time range each day (for example, during business hours)

AWS Case Study: Lyft
Check out how Lyft is using Auto Scaling to manage up to eight times more passengers during peak times and scale down at other times to stop paying for those resources.

AWS Case Study: Gruppo Editoriale
Check out how Gruppo Editoriale L’Espresso, an Italian multimedia firm, is using Auto Scaling for unpredictable traffic that sometimes grows as much as 300% if a story breaks.


AWS constantly works to reduce prices and, through economies of scale, to pass the savings on to our customers. With the variety of purchasing options available, customers can design pricing plans that fit their own requirements. At AWS, we are committed to listening and responding to customers, and to providing them with an infrastructure that is highly secure, reliable, and scalable, so they can focus on the products and features that differentiate them.

What’s Next?

Part 2 of this series will discuss ways to optimize costs for storage and other AWS services. If you're in NYC and want to learn more about cost optimization on AWS, join us at the AWS Loft for Running Lean Architectures on AWS + Office Hours. Register now!

Startup migration: around the world to AWS

by Andrea Li | on | in Startup Migration |

In 2016, we saw startups of all shapes and sizes migrate to Amazon Web Services (AWS). In true Amazon fashion, we dove a little deeper into our data to take a look at why. Here are five startups from China, India, Israel, and the US that we invited to share 100 words on their migration.

  • AWS knows what market leaders want
    Maybe it’s luck, maybe it’s experience. We’d love to believe that Houzz fell for the latter. Houzz is a US-based platform for home remodeling and design. It’s a place to find the right design and contact the best construction professionals. With a community of 40 million homeowners, home design enthusiasts, and home improvement professionals, Houzz is a leader in their field that wants to partner with a leader in the cloud. In 2016, Houzz needed additional scale, and we had 70 services ready for them to cherry-pick from. Architecting, infrastructure, builds… we have their back, and our services are always up to date (without the delays and the noise).
  • AWS helps startups expand internationally easily
    The AWS footprint spans 16 geographic regions and 42 Availability Zones worldwide. And that’s exactly what Ibibo Group wanted to leverage when they migrated to AWS. Ibibo is India’s largest online travel group, with services ranging from hotel booking to bus ticketing to vehicle tracking and car sharing. With over three million unique users transacting every month, Ibibo migrated to AWS to take advantage of our breadth of services as they continue to expand their business internationally. A worldwide travel group like Ibibo needs a universal cloud provider that’s strong and reliable in every region.
  • AWS helps scale the freshest ideas
    Take Fruitday, China’s largest e-commerce platform for fresh fruit imported from all around the world. Fruitday took a risk back in 2015 and launched its app to try to grow its traditional fruit retail business. Within a year, they scaled to over 10 million users and grew 100%. Fruitday decided to migrate to AWS last year for our reliability, customer service, and ability to help them seamlessly optimize their unique supply chain needs. We scale when there’s demand and growth – our flexibility is always in-season.
  • AWS saves you money
    Seeking Alpha is a platform for investment research that is crowdsourced by investors and industry experts. With four million registered users, seven million unique monthly visitors, and vast coverage of stocks, asset classes, and ETFs, Seeking Alpha enjoys our pay-per-use scheme to lower costs. Whether it’s a new moderator’s profile or 6,000 daily comments, you use (and pay for) only what you need. No long-term contracts or upfront commitments. We make sure our startups are operating efficiently and, more importantly, cost-effectively. Trust the finance guy to know: AWS makes sense financially.
  • AWS is easy to use
    The smart folks over at the brain-training company tend to think so. Lumos Labs (you might be more familiar with their product, Lumosity) is a US-based company whose products are used by 85 million people worldwide. Their 25+ brain games challenge their users’ memory, attention, flexibility, speed of processing, and problem solving. Lumos Labs leveraged some of the innovative services of AWS, like Amazon Redshift, to find the simplest solutions. With all that brain training, no wonder they thought migrating over was a no-brainer.

And there we have it. Five top startups around the world that migrated to AWS to solve very different problems. Technologically, they picked us due to our ease of use, cost-effectiveness, and scalability. Business-wise, they picked us because we understand the needs of global industry leaders.

Regardless of the reason, we know our customers truly love us because we always put them first. AWS continuously strives to serve all of our customers by pushing the envelope to innovate on their behalf. When you have a need, we strive to provide a service that meets it. And that’s why we’re excited to walk alongside you on your journey in the clouds.

The Israeli recipe (and no, it isn’t for hummus this time)

by Andrea Li | on | in An Insider's View |

By Noam Kaiser, VC Business Development, AWS

Israel is a startup superpower. You probably already knew that, but some of these facts and figures might surprise you:

  • Population: 8.06 million people[1]
  • Hi-tech exports account for 45% of GDP[2]
  • Technology companies: 5,720[3]
  • 1 in 12 startups globally is Israeli, making it the country with the highest number of startups per capita[4] (the same applies for lawyers, and I hope the two aren’t related)
  • R&D centers of multinational companies: 281[5]
  • Companies listed on NASDAQ: 76, ranked 3rd after the US and China[6]
  • Ranked 2nd following Switzerland in intellectual property (IP) innovation[7]
  • Exits amounting to $9.8B in 2014 and $9.2B in 2015[8]. Considering the IPO crunch, these records will not be broken this year.
  • Capital raising amounting to $3.4B in 2014 and $3.6B in 2015[9]. Considering the VC response to the same crunch, these records probably WILL be broken this year.

Pretty impressive stuff, and if you’re wondering how it happened I’ll get to that soon. But first let’s look at a fascinating change that’s been going on in the ecosystem in recent years. The best way to do that is to compare the startup environment five years ago with today’s environment.

Since 2011, there’s been a clear trend of a rising number of VC-backed companies, postponing a quick exit and advancing towards higher valuations and annual sales[10].

  • Over 30 Israeli IT companies crossed the $400M valuation threshold, compared with only three in 2011
  • Over 20 Israeli IT companies crossed the $100M annual sales threshold, compared with only six in 2011
  • 100 Israeli IT companies crossed the $10M annual sales threshold, compared with 50 in 2011
  • 8 Israeli IT companies performed NASDAQ/LSE IPOs, and there were 16 IPOs in 2014 with an average valuation of $1.75B, compared with none in 2011

Yep, the numbers tell the story: The “startup nation” is changing into the “scale-up nation.”

In addition to the numbers, here is what we see happening on the ground:

  • VCs/LPs are more patient with turnover timing, aiming higher.
  • The local scene boasts seasoned entrepreneurs, engaging in their 2nd, 3rd, and even 4th run.
  • Funding is growing. Local firms, global firms, and corporate VCs are investing more, giving Israeli startups their greatest-ever access to growth-stage funding, which has taken the biggest cut of VC funding for eight quarters now (IVC).
  • Foreign management is demonstrating an increasing willingness to join the leadership teams of Israeli companies, helping them grow.
  • More Israeli companies than ever are employing hundreds of employees, and some will grow to four digits soon. More and more foreign employees are coming in. These are signs of growth.
  • More Israeli startups are acquiring other startups globally. For example, IronSource, Jfrog, SimilarWeb, and Taboola have all acquired startups. They’re growing their businesses with new talent, features, and technologies. Some startups are even gaining additional market share.
  • The state of mind is there. You’ve come to expect young Israeli startups to disrupt certain industries. It is now completely conceivable to expect some to lead certain industries. For example, the following companies are leaders in these sectors:
    Big data: IronSource, SimilarWeb, SiSense, Windward, Dynamic Yield
    Ad tech, content, and discovery: OutBrain, Taboola, MinuteMedia, Kaltura, PlayBuzz, YotPo, EyeView, SundaySky
    DevOps and infrastructure: Jfrog, Redis Labs, Cloudinary, SpotInst, Velostrata, Stratoscale
    Enterprise software: WalkMe, Capriza, Gong, SAmanage
    GIS and automotive: Moovit, Gett, Nexar, Innoviz, Via (and, of course, success stories like Waze and MobileEye)
    Cyber security and fraud prevention: BioCatch, Forter, Riskified, Cybereason, Argus, Coronet, Guardicore, Morphisec, Dome 9, EnSilo, Minerva Labs


So how did Israel get there?

How did Israel turn into a startup hub in the first place? I believe it’s due to a unique combination of factors:

  1. Israel is an immigrant country. The community of Israelis includes individuals from all cultures and schools of thought. This diversity generates creative perspectives and new approaches.
  2. Necessity is the mother of invention. Unique security, agriculture, energy, and other needs brought about innovation across a range of fields, including military, communications, medicine, irrigation, and solar energy. With limited trade with its neighbors and no significant natural resources, innovation was Israel’s only choice.
  3. Academic education. From its inception, Israel has emphasized the importance of academic education. Despite its small size, Israel has five of the top 500 leading academic institutions globally.
  4. Investment in innovation. 5% of Israel’s budget is invested in high-tech companies of all stages, through various plans and grants of the Office of the Chief Scientist in the Ministry of Economy, in order to increase the amount of Israeli IP and successes. Israel also enjoys a vibrant and experienced VC ecosystem, now in its third decade, made up of veteran local VC firms, a promising wave of young local VCs, and a vast local presence of top corporates that are increasingly active. In addition, 281 multinational tech companies have R&D centers in Israel, including Amazon, Apple, Google, Intel, Microsoft, Qualcomm, Samsung, and Facebook. In fact, only the US hosts more multinational companies’ R&D centers than Israel. Usually it begins with a startup acquisition. This creates a cycle: Israelis gain experience with the multinational acquiring companies, they launch new startups, global leaders acquire them and set up shop in Israel, and so on.
  5. The IDF technologies. Military tech developments find their way into private market applications, giving companies a unique global edge. A great example is Given Imaging, a medical devices company.
  6. Unique intelligence and data tech capabilities combined with the military service effect. Military intel and cybersecurity units like 8100 and 8200, along with other intelligence agencies, incubate some of Israel’s best human resources. Young Israelis undergo unparalleled training and engage in sci-fi-like activities. Through their military service, young Israelis learn what responsibility, true challenges, and mature proportions are. This molds many Israelis into natural problem solvers, potential entrepreneurs, and leaders. After they complete their service, they often complement their skills with business education. Guess what happens next. Yep, startups.
  7. Local market too small. From day one, Israeli startups think globally; there’s no point in aiming locally. They never use a “” domain, it’s all “.com” (Well, recently it’s “.io”). Incidentally, that’s partly why many Israeli solutions aim at giving the little guy a chance to play in the big league, for example, Fiverr, EatWith, DubaMobile, Wix, and ProoV.
  8. Chutzpa (also known as chutzpah). It’s a Hebrew word that describes a bold attitude, something like “Of course our underfunded, three-person, Middle East-based startup will beat up Fortune 500 companies! What other option is there?”
  9. Efficiency over protocol and hierarchy. By the time a non-Israeli company finishes outlining its company structure, product road map, and work plan, an Israeli startup will have its beta product ready for installation. It’s partly because everyone does everything in an Israeli startup: The VP of Product gets involved in sales, the R&D team joins marketing sessions, and so on. It all seems like an effective mess that somehow works…until it doesn’t. Israeli startups dash through the seed and early stages because everyone does everything. However, startups need more structure when they hit the growth stage with 100+ employees, and when there’s a risk that senior management might relocate. Local and global investors can help these startups add more structure during the growth stage. But the main point is that the Israeli casual approach works well as an early-stage method.
  10. Tolerance of failure. Israeli investors are very “forgiving” compared with their global counterparts with regard to a startup founder’s past failures. The fact you failed COULD mean that you’ve learned, so past failures don’t necessarily get you shunned.


And then?

This unique combination of factors has made Israel an exciting, thriving hub for startups. Looking ahead, Israel will need to keep the momentum going by striving for new heights. It will present both a challenge and an opportunity for the Israeli VC and startup community.

In a sense, that’s the same challenge—and opportunity—that we at Amazon Web Services aim to help each and every startup with.

Noam is the VC Business Development Manager for Israel, Portugal, and Spain. He has been active in the Israeli VC and startup arena for over a decade now, working as a principal for two VCs (Gemini and Ofer Hi Tech) and CEO/VP of two startups (VentureApp and BAlink).



[1] World Bank
[2] Ministry of Treasury
[3] IVC
[4] StartupBlink
[5] Ministry of Economy
[6] NASDAQ
[7] IMD World Competitiveness Center
[8] KPMG IVC Survey
[9] Ibid
[10] IVC

The Alexa Fund and the new Alexa Accelerator

by Andrea Li | on | in An Insider's View |

By Rodrigo Prudencio, The Alexa Fund

Just over 18 months ago, we set out to build Amazon’s first dedicated corporate venture capital fund with the same mindset of any new Amazon experiment: Work Hard, Have Fun, Make History. And we’re doing just that. The Alexa Fund, with an initial $100MM investment commitment, has already made 23 investments in companies committed to building delightful experiences using voice as a primary interface.

The Alexa Fund is named after the voice technology that powers Amazon products like the Echo, Amazon Tap, and Echo Dot, as well as Amazon Fire TV and Fire tablets. Alexa’s voice capabilities are purposefully built so developers can use the Alexa Skills Kit (ASK) to create new voice experiences for Amazon’s devices, or Alexa Voice Service (AVS) when they want to embed Alexa into a third-party device.

In building the portfolio, we’ve applied investment criteria that reflect best practices from the venture capital community. We favor great teams with a passion for building world-class products and companies that are differentiated against their competitors. We work with VCs, angels, and other forms of institutional capital as co-investors, leveraging their network and company-building expertise.

The stage of company is also a consideration, but we’ve shown interest in backing small and early-stage companies as well as more mature companies. Defined Crowd, for example, is an investment in a small team using crowdsourcing to build voice services such as transcription, annotation, and lexicons on behalf of large enterprises around the world. Ecobee, on the other hand, is a well-established builder of advanced thermostats capable of networking together to provide system-wide energy savings. Regardless of stage, we back up our dollars with support from the Alexa organization to help portfolio companies solve technical challenges and develop effective go-to-market strategies.

It’s an honor to work alongside promising entrepreneurs who are innovating in ways Amazon may never have imagined. But as we say at Amazon, it’s still Day 1 and we have much more to do to expand Alexa’s presence and capabilities.

That’s why we created our latest initiative, the Alexa Accelerator, powered by Techstars. The Alexa Accelerator will accept about 10 companies to participate in a 13-week startup course running from July to September 2017. We will seek out companies tackling hard problems in a variety of domains—consumer, productivity, enterprise, entertainment, health and wellness, travel—that are interested in making an Alexa integration a priority. We’ll also look for companies that are building enabling technology such as natural language understanding (NLU) and better hardware designs that can extend or add to Alexa’s capabilities.

The Alexa Accelerator is just one more way in which Amazon is working closely with startups and investors. We hope we’ll see many of our existing VC friends and meet new ones when we reveal a group of new companies at the Alexa Accelerator demo day in October.


Rodrigo is a member of the Alexa Fund, the Amazon team investing in startups to support the Alexa environment. Prior to Amazon, Rodrigo founded Shuddle and led energy-related IT investments for Nth Power.

The Insider’s View series is a collaboration between different teams at Amazon. It shares our company’s unique insights, products we’re developing, reasons for our business focus, and most importantly, how our peculiar culture enables us to lead by always placing our customers first.

A look inside Vidora’s globally distributed, low-latency A.I.

by admin | on | in Featured Guests |

Guest post by Philip West, Founder, Vidora

Artificial Intelligence (A.I.) has dominated the tech headlines throughout 2016, and it shows no signs of letting up as we kick off 2017. While the tech giants push A.I. in their own specific ways, many other businesses are looking for solutions to stay up to speed and effectively apply A.I. to optimize their organizations and meet goals. This has opened the door for many startups to enter the market as well.

At Vidora, we look forward to helping push this innovation forward as 2017 gets underway. Vidora offers a specialized A.I. platform that enables premium media, commerce, and consumer brands like News Corp, Walmart, and Panasonic to increase user retention by predicting the lifetime value of users and by automatically increasing value using 1-to-1 personalization.

Building a specialized A.I. for your own business, let alone a general one for the masses, is difficult. It’s expensive to build and maintain, it’s hard to reliably test at scale, and it takes time and patience to allow machine intelligence to learn and mature. The Vidora team has spent countless hours building and evolving our solution. One big reason that we’ve been able to make such great progress on our A.I. and adapt it to companies of large scale is the flexibility and functionality provided by AWS. In this post, we give you a peek inside how Vidora’s system works, what tools we’ve used, and how AWS has helped make this complex technology a reality.

Vidora on AWS diagram

Data ingestion

Vidora’s A.I. starts with data ingestion. Sharing data can be a pain due to the countless ways it can be structured and organized. Fortunately, AWS already provides numerous ways to share data, making the process easier. A user’s behavioral events are sent to us via a variety of methods: Vidora’s API, Amazon Kinesis, Amazon Redshift, and custom pull-based systems that pull from Amazon Simple Storage Service (Amazon S3). This data includes an anonymous but unique ID for the user, along with some information on what that user just “did,” such as read an article, clicked a link, watched a video, or liked a post. We translate each of these into a dataframe-friendly format that is stored in S3 every few minutes for Spark processing. As this data comes in, various other analytics are stored in both Redis and Cassandra as well.
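
As an illustration of that translation step, heterogeneous events might be projected onto a fixed column set before landing in S3. The event shape and field names below are hypothetical, not Vidora's actual schema:

```python
# Flatten heterogeneous behavioral events into uniform, dataframe-friendly
# rows. Field names are hypothetical; a real ingest schema will differ.

FIELDS = ("user_id", "action", "item_id", "timestamp")

def to_row(event):
    """Project an arbitrary event dict onto a fixed column set,
    filling missing columns with None so every row aligns."""
    return {field: event.get(field) for field in FIELDS}

events = [
    {"user_id": "u1", "action": "read_article", "item_id": "a9",
     "timestamp": 1483228800},
    {"user_id": "u2", "action": "liked_post", "timestamp": 1483228900,
     "extra": "ignored"},  # unknown fields are dropped
]

rows = [to_row(e) for e in events]
print(rows[1])
# {'user_id': 'u2', 'action': 'liked_post', 'item_id': None, 'timestamp': 1483228900}
```

Uniform rows like these load directly into a Spark or Pandas dataframe without per-source special cases.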

Modeling & profile generation

Once we have the data, the next step is building our A.I. models. This lies at the heart of what Vidora does. Each of our customers has different scales of the amount of data we need to process, and varying intervals for how often the underlying A.I. models need to be updated. They also have different business goals, each with unique needs and constraints. For example, one customer might need to send weekly personalized emails, while another might need to optimize push notifications in near real-time.

Given the large amounts of data and the variations in the output required, Vidora needs a tool that can run fast map-reduce jobs and also serve as a simple solution for investigatory data science. Vidora has found Spark on Amazon EMR to be a great fit. Spark interfaces nicely with Python and enables us to execute Pandas dataframe operations at scale without having to do much code optimization or query pre-definition. Amazon EMR gives us a simple way to spin up Spot clusters with Spark on various schedules and with custom parameters, and then spin them down once the jobs are finished, ultimately saving us money. By using Spot clusters, we typically see savings of 80% compared with the on-demand price.

Once the machine learning models are generated, our queue-based processing system spins up Spot instance worker machines in Amazon EC2 that build and constantly update profiles for the most recently active users. This means we can update a user’s profile minutes after their last activity. These profiles contain information such as each user’s likelihood of returning to our customers’ products and services, what marketing channel is the most effective to reach them, when a message should be sent, and what specific content the user is most likely to engage with. Vidora has written its own machine learning algorithms to identify these characteristics of the profiles. Included in that process is a layer of information-theoretic techniques that assess the importance of each feature’s influence on user retention or any other high-level goal the customer has. This allows us to predict beforehand whether a specific action or set of content will have a positive or negative impact on the user’s loyalty, and by how much.
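
Mutual information is one common information-theoretic measure for this kind of feature assessment. The toy sketch below is our own illustration of the idea, not Vidora's actual algorithm; it scores a binary feature against a binary retention label:

```python
# Toy mutual-information score between a binary feature and a binary
# retention label -- one example of the information-theoretic techniques
# mentioned above, not Vidora's implementation.
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) in bits for two equal-length discrete sequences."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        p_indep = (px[x] / n) * (py[y] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

# Feature perfectly predicts retention -> 1 bit of information.
print(mutual_information([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0

# Feature is independent of retention -> 0 bits.
print(mutual_information([1, 0, 1, 0], [1, 1, 0, 0]))  # 0.0
```

Features with higher scores carry more information about whether a user will return, which is the sense in which a feature's "influence" on a high-level goal can be ranked.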

To manage the worker cluster that builds the profiles, we recently began using Spot fleet configurations. With Spot fleets, we now can get the best-priced computing power across a variety of instance types and Availability Zones, with no effort on our end other than the initial setup. It’s also trivial to adjust the size of our fleets using the aws-cli tool and its modify-spot-fleet-request command, which allows us to auto-scale the fleet size based on how large our processing queue is.
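
A minimal sketch of that queue-driven sizing logic might look like the following. The thresholds and per-worker throughput are made-up numbers; the computed target would then be applied with the `aws ec2 modify-spot-fleet-request` command mentioned above.

```python
# Derive a Spot fleet target capacity from the processing-queue depth.
# Bounds and per-worker throughput are illustrative, not production values.
import math

MIN_CAPACITY = 2      # keep a small fleet warm at all times
MAX_CAPACITY = 100    # cost ceiling
JOBS_PER_WORKER = 50  # assumed queue items one worker drains per cycle

def target_capacity(queue_depth):
    """Scale the fleet linearly with queue depth, within fixed bounds."""
    needed = math.ceil(queue_depth / JOBS_PER_WORKER)
    return max(MIN_CAPACITY, min(needed, MAX_CAPACITY))

print(target_capacity(0))      # 2
print(target_capacity(730))    # 15
print(target_capacity(50000))  # 100

# The result would be applied with something like:
#   aws ec2 modify-spot-fleet-request \
#       --spot-fleet-request-id sfr-... \
#       --target-capacity 15
```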

Multi-region Cassandra configuration

User profiles can take up quite a bit of space, especially when you’re working with customers at Walmart’s or News Corp’s scale. These customers also require global coverage because many of them own multiple properties in various locations. To meet these requirements, we store our user profiles in Cassandra running in a multi-region replicated configuration, so that every user’s profile is available from a multitude of geographical locations. This allows us to do the high-cost processing in only one region.

All writes happen in the U.S. East (N. Virginia) Region, but then are seamlessly sent to the other regions via DataStax’s Ec2MultiRegionSnitch for Cassandra. To ensure our reads are as low-latency as possible, we use Cassandra’s LOCAL_ONE consistency for reads, meaning we return the first result we find from the local region without double-checking any other replicas for consistency. Using this strategy, we risk the data becoming gradually inconsistent, so we run full repairs daily to correct them in the background. We ensure the clusters have enough CPU and I/O capacity to always have a repair running without impacting latency.

Low-latency APIs

After we build and store user profiles, customers need to access them via APIs on a per-user basis, whether it’s for emails, push, or web experiences. These APIs often lie in the critical path of each user’s experience, thus demanding extremely low latency and high reliability across the world.

As mentioned earlier, we store user profiles in Cassandra across several regions, which improves the lookup times for user profiles to meet these low latency conditions. Similarly, our API servers are also deployed in the same regions, helping to decrease the time customers spend waiting for a response. We also aggressively cache much of our data with Redis to ensure even lower latency for most of our results.

Finally, we use Amazon Route 53 for DNS, specifically Route 53 latency-based routing and health checks to ensure each region is healthy. This satisfies both low latency and high availability: When everything is up, the customer talks to the nearest region for the fastest response, but if we lose a region, our Route 53 DNS failover configuration reroutes to a healthy region, providing reliability.

What’s next

While building an A.I. is difficult, AWS services dramatically simplify the architectural decisions you need to make as well as the tactical steps you need to take to manage the system. You can store massive amounts of data at very affordable rates, spin processing clusters up and down with the latest and greatest map-reduce frameworks, and address a global audience quite easily with a suite of cloud-computing services. At Vidora, we hope that our learnings from building an A.I. in the cloud can benefit others, and we look forward to hearing more about the innovative ways others decide to use AWS as we enter 2017, the year of A.I.

Please reach out to us at or sign up for our talk at the AWS Loft in San Francisco to learn more!

Ensuring your investments invest in the right technology

by Mackenzie Kosut | on | in An Insider's View |

As an investor you’ve developed an exceptional ability to invest in the right startups. These companies combine proven business models with a solid technical foundation to build some of the most exciting products around the world. But how do you ensure your investments are investing in the right technology?
Making the wrong technical choices early on can create technical debt that compounds your growing pains tenfold. Ask any big enterprise how they decide on technology, and they’ll walk you through their technical evaluation process: reviewing software capability matrices, reading case studies, talking to other customers, and running multiple proofs of concept on development workloads. Startups don’t have the resources or time to conduct such extensive evaluations. They need to make quick decisions about their technical stack, and as a result the risk of getting it wrong is much higher.
Let’s talk about some ways to help you and your thriving companies avoid common pitfalls that early stage companies make:

  1. Small teams benefit from well-adopted technologies
    There is a high cost of adopting early technologies, especially ones that have not been battle-tested in large production environments. Everyone wants to build with the latest hip language, but rarely does the new language provide enough of a benefit that it outweighs the learning curve or risk of early adoption. Never underestimate the track record of a well-established and utilized language or service. Well-adopted languages and services have more technical resources that are available online, and troubleshooting issues will be less time consuming. Also, it will be easier to hire engineers who are familiar with the technology. More time can be focused on what matters: building a better product.

  2. The Do-It-Yourself mentality is great, but not when it comes to critical services
    Startups all have their domain experts: the seasoned MySQL “big data” expert, the Elasticsearch guru, and so on. These engineers are valuable assets for a company in the early days, but that expertise can be costly if it lures a company away from the reliability and operational advantages of managed services. It’s easy for a small company to manage a small cluster, but as usage grows, so does the operational overhead. Managed services such as Amazon RDS, Amazon Elasticsearch Service, Amazon Redshift, Amazon Kinesis, and Amazon Aurora mean that clusters can be managed, updated, and backed up by AWS. Remember that every company needs a database, a server, and more. Any time spent managing these common infrastructure components is time that could be better spent developing new products and services.

  3. Build mentorship programs early on
    If a company has a less experienced technical lead, help provide them with a mentor. Mentors fill the void left where these engineers could previously bounce ideas off a trusted colleague. Mentors can help your teams make the right technical decisions because they understand the challenges the engineering team is facing. Great mentors likely already exist among other companies in your portfolio. Ask around about who is a strong leader in specific areas such as infrastructure, security, front end, data, AI, or marketing. This exposure to a variety of challenges will also help your mentors bring new perspectives and solutions back to their own projects. In summary, having an experienced individual that companies can engage regarding architectural decisions, data design, security, and more will help keep your early-stage companies feeling confident about their decisions.

There are many more best practices and recommendations that we will cover in upcoming articles. These three pillars are a good start for helping you and your companies figure out what technology makes sense for you today. Providing these companies with access to the right resources, with the right guidance, will help set up these teams for long-term success.

Mackenzie Kosut is the Global Startup Evangelist at Amazon Web Services (AWS). Prior to AWS he worked at Betterment, Oscar, Tumblr, and more. Mackenzie travels the globe seeking out groundbreaking startups on AWS, sharing the cool things they’re doing through blog, live video, and social media.

Top 10 most read Startup blog posts in 2016

by Rei Biermann | on | in Announcements |

Take a look through our list of the top ten most read blog posts from 2016! Get caught up on ones you may have missed or share your favorite. If you are an AWS startup solving unique challenges, building innovative technology or approaching conventional problems in unconventional ways – we want to hear from you! We welcome ideas, suggestions, and feedback to help improve the AWS Startups Blog experience.

Don’t forget that you can subscribe to our blog’s RSS feed to stay up to date on our latest posts. Share your story, check out what’s new at the AWS Pop-up Lofts, sign up for AWS Activate, and read up on AWS Hot Startups!