Tag: Advice from AWS Solutions Architects
The qualities that make an uber-successful tech startup founder are a complex, even mystical, blend of traits. As we celebrate the stars of our AWS Software Startup awards, meet three incredible founders who share key characteristics and are blazing a distinctive trail.
We occasionally run into startups that built their initial MVP on Firebase but want to switch to AWS to operate at scale with better data quality and reliability guarantees, and at lower cost. Because Firebase consists of proprietary services, APIs, and an SDK, migrating to AWS requires application refactoring: introducing a new architecture built on AWS services, and rewriting parts of the codebase to use them. To minimize the disruption of this refactoring, this guide will help you identify which AWS services are best suited for your startup's new architecture, along with implementation strategies to ease and accelerate the cutover.
AWS IoT Core has many features that tackle different challenges IoT customers often face. Reading about them in different places and figuring out exactly what to use each one for can be overwhelming. In this blog post, we go through the different components of AWS IoT Core and walk you through an example of how a fictional startup uses them to its benefit.
In this post, we present a systematic approach to guide customers migrating a few commonly used cloud data analytics services from Google Cloud Platform (GCP) to AWS. Rather than a detailed step-by-step implementation guide for a specific service, it offers a holistic view of how to migrate these GCP services to AWS.
For most machine learning startups, the most valuable resource is time. They want to focus on developing the unique aspects of their business, not managing the dynamic compute infrastructure needed to run their applications. Productionizing machine learning should be easier, and that's where AWS comes in. In this blog post and corresponding GitHub repo, you will learn how to bring a pre-trained model to Amazon SageMaker to have production-ready model serving in under 15 minutes.
Being able to choose powerful instances on demand to cut your training time, paying only for the seconds you use them, while still running your notebooks in your favorite tooling, opens up large opportunities for cost savings and productivity at startups. AWS Startup Solutions Architect Manager Daniel Bernao walks us through how to do it.
Want to build a database-backed website, or the backend to a mobile app? Set up a WordPress or Drupal site, or just use an Amazon S3 bucket to store files? You can do all this and much more on AWS.
In this second installment of the Scaling Down Infrastructure series, we look at cost optimization techniques for your databases, covering the popular engines we see you using most, whether your workload is analytical or transactional, and whether your data is relational, document, key-value, or time-series in nature.
AWS Startup Solutions Architect Paul Underwood believes that a data lake is just another complex and heterogeneous infrastructure problem. In this post, he illustrates how you might build a data lake-as-code using the AWS Cloud Development Kit (CDK). He outlines the strategy, the core data lake services used, the associated costs, and how to tie it all together with code.
In part one of this blog post series, we look at four best practices to help reduce AWS spend with quick wins, each achievable in under two hours.