AWS Blog

AWS Hot Startups – April 2017

by Ana Visneski | in Startups

Spring is here, the flowers are blooming and Tina Barr is back with more great startups for you to check out!

-Ana


Welcome back to another month of hot AWS-powered startups! Today we have three exciting startups:

  • Beekeeper – simplifying employee communication in the workplace.
  • Betterment – making investing easier for everyone.
  • ClearSlide – a leading sales engagement platform.

Be sure to check out our March hot startups in case you missed them.

Beekeeper (Zurich, Switzerland)
Flavio Pfaffhauser and Christian Grossmann, both graduates of ETH Zurich, were passionate about building a technology that would connect and bring people together. What started as a student’s social community soon turned into Beekeeper – a communication platform for the workplace that allows employees to interact wherever they are. As Flavio and Christian learned how to build a social platform that engaged people properly, businesses began requesting a platform that could be adapted to their specific processes and needs. The platform started with the concept of helping people feel as if they are sitting right next to each other, whether they’re at a desk or in the field. Founded in 2012, Beekeeper is focused on improving information sharing, communication and peer collaboration, and the company strongly believes that listening to employees is crucial for organizations.

The “Mobile First, Desktop Friendly” platform has a simple and intuitive interface that easily integrates multiple operating systems into one ecosystem. The interface can be styled and customized to match a company’s brand and identity. Employees can connect with their colleagues anytime and anywhere with private and group chats, video and file sharing, and feedback surveys. With Beekeeper’s analytical dashboard leadership teams can identify trending topics of discussion and track employee engagement and app usage in real-time. Beekeeper is currently connecting users in 137 countries across industries including hospitality, construction, transportation, and more.

Beekeeper likes using AWS because it allows their engineers to focus on the things that really matter: solving customer issues. The company builds its infrastructure using services like Amazon EC2, Amazon S3, and Amazon RDS, all of which allow the technical teams to offload administrative tasks. Amazon Elastic Transcoder and Amazon QuickSight are used to build analytical dashboards, and Amazon Redshift is used for data warehousing.

Check out the Beekeeper blog to keep up with their latest news!

Betterment (New York, NY)
Betterment is on a mission to make investing easier and more accessible for everyone, no matter their financial goal. In 2008, Jon Stein founded Betterment with the intent to reinvent the industry and save future investors from making the same common mistakes he had been making. At that time, most people only had a couple of options when it came to investing their money – either do it yourself or hire another person to do it for you. Unfortunately, financial advisors are sometimes paid to recommend certain investments even if they’re not what is best for their clients. Betterment only chooses investments that are in their customers’ best interest and align with their financial goals. Today, they are the largest independent online investment advisor, managing more than $8 billion in assets for over 240,000 customers.

Betterment uses technology to make investing easier and more efficient, while also helping to increase after-tax returns. They offer a wide range of financial planning services that are personalized to their customers’ life goals. To start an investment plan, customers can input their age, retirement status, and annual income, and Betterment will recommend how much money to invest and which type of account is the right choice. Betterment then invests and manages the money in ways that many traditional investment services can’t, and at a lower cost.

The engineers at Betterment are constantly working to build industry-changing technology as quickly as possible to help customers maximize their money. AWS gives Betterment the flexibility to easily provision infrastructure and offload functions to various services that once required entire teams to manage. When they first started in the cloud, Betterment was using standard implementations of Amazon EC2, Amazon RDS, and Amazon S3. Since they’ve gone all in with AWS, they have been leveraging services like Amazon Redshift, AWS Lambda, AWS Database Migration Service, Amazon Kinesis, Amazon DynamoDB, and more. Today, they are using over 20 AWS services to develop, test, and deploy features and enhancements on a daily basis.

Learn more about Betterment here.

ClearSlide (San Francisco, CA)
ClearSlide is one of today’s leading sales engagement platforms, offering a complete and integrated tool that makes every customer interaction successful. Since its founding in 2009, ClearSlide has looked for ways to improve customer experiences and has developed numerous enablement tools for sales leaders and teams, marketing, customer support teams, and more. The platform puts content, communication channels, and insights at its customers’ fingertips to help drive better decisions and manage opportunities. ClearSlide serves thousands of companies including Comcast, the Sacramento Kings, and The Economist, and so far its customers have generated over 750 million minutes of engagement!

ClearSlide offers a solution for all parts of the sales process. For sales leaders, ClearSlide provides engagement dashboards to improve deal visibility, coaching, and sales forecast accuracy. For marketing and sales enablement teams, they guide sellers to the right content, at the right time, in the right context, and provide insight to maximize content ROI. For sales reps, ClearSlide integrates communications, content, and analytics in a single platform experience. Communications can be made across email, in-person or online meetings, web, or social. Today, ClearSlide customers report a 10-20% increase in closed deals, 25% decrease in onboarding time for new reps, and a 50-80% reduction in selling costs.

ClearSlide uses a range of AWS services, but Amazon EC2 and Amazon RDS have made the biggest impact on their business. EC2 enables them to easily scale compute capacity, which is critical for a fast-growing startup. It also provides consistency during deployment – from development and integration to staging and production. RDS reduces overhead and allows ClearSlide to scale their database infrastructure. Since AWS takes care of time-consuming database management tasks, ClearSlide sees a reduction in operations costs and can focus on being more strategic with their customers.

Watch this video to learn how LiveIntent reduced sales cycles by 22% using ClearSlide. Get all the latest updates by following them on Twitter!

Thanks for checking out another month of awesome AWS-powered startups!

-Tina

 

New – Server-Side Encryption for Amazon Simple Queue Service (SQS)

by Jeff Barr | in Amazon Simple Queue Service (SQS), Key Management Service

As one of the most venerable members of the AWS family of services, Amazon Simple Queue Service (SQS) is an essential part of many applications. Presentations such as Amazon SQS for Better Architecture and Massive Message Processing with Amazon SQS and Amazon DynamoDB explain how SQS can be used to build applications that are resilient and highly scalable.

Today we are making SQS even more useful by adding support for server-side encryption. You can now choose to have SQS encrypt messages stored in both Standard and FIFO queues using an encryption key provided by AWS Key Management Service (KMS). You can choose this option when you create your queue and you can also set it for an existing queue.

SSE encrypts the body of the message, but does not touch the queue metadata, the message metadata, or the per-queue metrics. Adding encryption to an existing queue does not encrypt any backlogged messages. Similarly, removing encryption leaves backlogged messages encrypted.

Creating an Encrypted Queue
The newest version of the AWS Management Console allows you to choose between Standard and FIFO queues using a handy graphic:

You can set the attributes for the queue and the optional Dead Letter Queue:

And you can now check Use SSE and select the desired key:

You can use the AWS-managed Customer Master Key (CMK) which is unique for each customer and region, or you can create and manage your own keys. If you choose to use your own keys, don’t forget to update your KMS key policies so that they allow for encryption and decryption of messages.

You can also configure the data key reuse period. This interval controls how often SQS refreshes cryptographic material from KMS, and can range from 1 minute up to 24 hours. Using a shorter interval can improve your security profile, but may increase your KMS costs.
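To make this concrete, here is a minimal boto3 sketch of creating an encrypted queue and setting the data key reuse period (the queue name is a placeholder; alias/aws/sqs is the AWS-managed CMK for SQS):

import boto3

sqs = boto3.client('sqs')

# Create a queue encrypted with the AWS-managed CMK for SQS.
# KmsDataKeyReusePeriodSeconds controls how often SQS goes back to KMS
# for fresh cryptographic material (60 seconds up to 86,400 seconds).
response = sqs.create_queue(
    QueueName='my-encrypted-queue',  # placeholder name
    Attributes={
        'KmsMasterKeyId': 'alias/aws/sqs',
        'KmsDataKeyReusePeriodSeconds': '300',  # 5 minutes
    },
)
print(response['QueueUrl'])

# SSE can also be turned on for an existing queue:
sqs.set_queue_attributes(
    QueueUrl=response['QueueUrl'],
    Attributes={'KmsMasterKeyId': 'alias/aws/sqs'},
)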

To learn more, read the SQS SSE FAQ and the documentation for Server-Side Encryption.

Available Now
Server-side encryption is available today in the US West (Oregon) and US East (Ohio) Regions, with support for others in the works.

There is no charge for the use of encryption, but you will be charged for the calls that SQS makes to KMS. To learn more about this, read How do I Estimate My Customer Master Key (CMK) Usage Costs.

Jeff;

Use Amazon WorkSpaces on Your Samsung Galaxy S8/S8+ With the New Samsung DeX

by Jeff Barr | in Amazon WorkSpaces

It is really interesting to watch as technology evolves and improves. For example, today’s mobile phones offer screens with resolution that rivals that of a high-end desktop, along with multiple connectivity options and portability.

Earlier this week I had the opportunity to get some hands-on experience with the brand-new Samsung Galaxy S8+ phone and a unique new companion device called the Samsung DeX Station. I installed the Amazon WorkSpaces client for Android tablets on the phone, entered the registration code for my WorkSpace, and logged in. You can see all of this in action in my new video:

DeX includes USB connectors for your keyboard and mouse, and can also communicate with them using Bluetooth. It also includes a cooling fan, a fast phone charger, and HDMI and Ethernet ports (you can also use your phone’s cellular or Wi-Fi connection).

Bring it all together and you can get to your cloud-based desktop from just about anywhere. Travel light, use the TV / monitor in your hotel room, and enjoy full access to your corporate network, files, and other resources.

Jeff;

PS – If you want to know more about my working environment, check out I Love my Amazon WorkSpace.

File Interface to AWS Storage Gateway

by Jeff Barr | in Amazon S3, AWS re:Invent, AWS Storage Gateway

I should probably have a blog category for “catching up from AWS re:Invent!” Last November we made a really important addition to the AWS Storage Gateway that I was too busy to research and write about at the time.

As a reminder, the Storage Gateway is a multi-protocol storage appliance that fits in between your existing applications and the AWS Cloud. Your applications and your client operating systems see the gateway as (depending on the configuration) a file server, a local disk volume, or a virtual tape library (VTL). Behind the scenes, the gateway uses Amazon Simple Storage Service (S3) for cost-effective, durable, and secure storage. Storage Gateway caches data locally and uses bandwidth management to optimize data transfers.

Storage Gateway is delivered as a self-contained virtual appliance that is easy to install, configure, and run (read the Storage Gateway User Guide to learn more). It allows you to take advantage of the scale, durability, and cost benefits of cloud storage from your existing environment. It reduces the process of moving existing files and directories into S3 to a simple drag and drop (or a CLI-based copy).

As is the case with many AWS services, the Storage Gateway has gained many features since we first launched it in 2012 (The AWS Storage Gateway – Integrate Your Existing On-Premises Applications with AWS Cloud Storage). At launch, the Storage Gateway allowed you to create storage volumes and to attach them as iSCSI devices, with options to store either the entire volume or a cache of the most frequently accessed data in the gateway, all backed by S3. Later, we added support for Virtual Tape Libraries (Create a Virtual Tape Library Using the AWS Storage Gateway). Earlier this year we added read-only file shares, user permission squashing, and scanning for added and removed objects.

New File Interface
At AWS re:Invent we launched a third option, and that’s what I’d like to tell you about today. You can now use the Storage Gateway as a virtual file server that you can mount on your on-premises servers and desktops. After you set it up in your data center or in the cloud, your configured buckets will be available as NFS mount points. Your application simply reads and writes files and directories over NFS; behind the scenes, the gateway turns these operations into object-level requests on your S3 buckets, where they are accessible natively (one S3 object per file). To create a file gateway, you simply visit the Storage Gateway Console, click on Get started, and choose File gateway:

Then choose your host platform: VMware ESXi or Amazon EC2:

I expect many of our customers to host the Storage Gateway on premises and to use it as a permanent or temporary bridge to the cloud. Use cases for this option include simplified backups, migration, archiving, analytics, storage tiering, and compute-intensive cloud-based processing. Once the data is in the cloud, you can take advantage of many features of S3 including multiple storage tiers (Infrequent Access and Glacier are great for archiving), storage analytics, tagging, and the like.

I don’t have much data on premises, so I’m going to run the Storage Gateway on an EC2 instance for this post. I launched the instance and set it up per the instructions on the screen, taking care to create the proper inbound security group rules (port 80 for HTTP access and port 2049 for NFS). I added 150 GiB of General Purpose SSD storage to be used as a cache:

After the instance launched I captured its public IP address and used it to connect to my newly launched gateway:

I set the time zone and assigned a name to my gateway and clicked on Activate gateway:

Then I configured the local storage as a cache, and clicked on Save and continue:

My gateway was up and running, and I could see it in the console:

Next, I clicked on Create file share to create an NFS share and associate it with an S3 bucket:

As you can see, I had the opportunity to choose my storage class (Standard or Standard – Infrequent Access), in accord with my needs and my use case. The gateway needs to be able to upload files into my bucket; clicking on Create a new IAM role will create a role and a policy (read Granting Access to an Amazon S3 Destination to learn more).

I reviewed my settings and clicked on Create file share:

By the way, Root squash is a feature of the AWS Storage Gateway, not a vegetable. When enabled (as it is by default) files that arrive as owned by root (user id 0) are mapped to user id 65534 (traditionally known as nobody). I can also set up default permissions for new files and new directories.

My new share is visible in the console, and available for use within seconds:

The console displays the appropriate mount commands for Linux, Microsoft Windows, and macOS. Those commands use the private IP address of the instance; in many cases you will want to use the public address instead (needless to say, you should exercise extreme care when you create a public NFS share, and maintain close control over the IP addresses that are allowed to connect).

I flipped over to the S3 console and inspected the bucket (jbarr-gw-1), finding it empty, as expected:

Then I turned to my EC2 instance, mounted the share, and copied some files to it:

I returned to the console and found a new folder (jeff_code) in my bucket, as expected. I ventured inside and found the files that I had copied to the share:

As you can see, my files are copied directly into S3 and are simply regular S3 objects. This means that I can use my existing S3 tools, code, and analytics to process them. For example:

  • Analytics – The new S3 metrics and analytics can be used to analyze the entire bucket or any directory tree within it:
  • Code – AWS Lambda and Amazon Rekognition can be used to process uploaded images; see Serverless Photo Recognition for some ideas and some code. I could also use Amazon Elasticsearch Service to index some or all of the files or Amazon EMR to process massive amounts of data.
  • Tools – I can process the existing objects in the bucket and I can also create new ones using the S3 APIs. Any code or script that creates or removes objects should call the RefreshCache function to synchronize the contents of any gateways attached to the bucket (I can create a multi-site data distribution workflow by pointing multiple read-only gateways at the same bucket); see the sketch after this list. I can also make use of existing, file-centric backup tools by using the share as the destination for my backups.
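For example, here is a minimal boto3 sketch (the file share ARN is a placeholder) that reads the bucket directly and then calls RefreshCache so that attached gateways pick up objects created outside the share:

import boto3

s3 = boto3.client('s3')
sgw = boto3.client('storagegateway')

# Files written through the share are plain S3 objects, so the usual
# S3 APIs work on them directly.
listing = s3.list_objects_v2(Bucket='jbarr-gw-1', Prefix='jeff_code/')
for obj in listing.get('Contents', []):
    print(obj['Key'], obj['Size'])

# After creating or removing objects directly in S3, ask the gateway
# to re-scan the bucket so its NFS view stays in sync.
sgw.refresh_cache(
    FileShareARN='arn:aws:storagegateway:us-east-1:123456789012:share/share-12345678'
)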

The gateway stores all of the file metadata (owner, group, permissions, and so forth) as S3 metadata:

Storage Gateway Resources
Here are some resources that will help you to learn more about the Storage Gateway:

Presentation – Deep Dive on the AWS Storage Gateway:

White Paper – File Gateway for Hybrid Architectures – Overview and Best Practices:

Recent Videos:

Available Now
This cool AWS feature has been available since last November!

Jeff;

 

AWS Enables Consortium Science to Accelerate Discovery

by Jeff Barr | in Guest Post

My colleague Mia Champion is a scientist (check out her publications), an AWS Certified Solutions Architect, and an AWS Certified Developer. The time that she spent doing research on large datasets gave her an appreciation for the value of cloud computing in the bioinformatics space, which she summarizes and explains in the guest post below!

Jeff;


Technological advances in scientific research continue to enable the collection of exponentially growing datasets that are also increasing in the complexity of their content. The global pace of innovation is now also fueled by the recent cloud-computing revolution, which provides researchers with a seemingly boundless, scalable, and agile infrastructure. Now, researchers can remove the hindrances of having to own and maintain their own sequencers, microscopes, compute clusters, and more. Using the cloud, scientists can easily store, manage, process, and share datasets for millions of patient samples with gigabytes and more of data for each individual. As the American physicist John Bardeen once said: “Science is a collaborative effort. The combined results of several people working together is much more effective than could be that of an individual scientist working alone.”

Prioritizing Reproducible Innovation, Democratization, and Data Protection
Today, we have many individual researchers and organizations leveraging secure, cloud-enabled data sharing on an unprecedented scale and producing innovative, customized analytical solutions using the AWS Cloud. But can secure data sharing and analytics be done on such a collaborative scale as to revolutionize the way science is done across a domain of interest, or even across disciplines of science? Can building a cloud-enabled consortium of resources remove the analytical variability that leads to diminished reproducibility, which has long plagued the interpretability and impact of research discoveries? The answers to these questions are “yes”, and initiatives such as the Neuro Cloud Consortium, The Global Alliance for Genomics and Health (GA4GH), and The Sage Bionetworks Synapse platform, which powers many research consortiums including the DREAM challenges, are starting to put into practice model cloud initiatives that will not only provide impactful discoveries in the areas of neuroscience, infectious disease, and cancer, but are also revolutionizing the way in which scientific research is done.

Bringing Crowd Developed Models, Algorithms, and Functions to the Data
Collaborative projects have traditionally allowed investigators to download datasets such as those used for comparative sequence analysis or for training a deep learning algorithm on medical imaging data. Investigators were then able to develop and execute their analysis using institutional clusters, local workstations, or even laptops:

This method of collaboration is problematic for many reasons. The first concern is data security, since dataset download essentially permits “chain-data-sharing” with any number of recipients. Second, analytics done using compute environments that are not templated at some level introduce the risk of variable results that are not reproducible by a different investigator, or even by the same investigator using a different compute environment. Third, the required data dump, processing, and then re-upload or distribution to the collaborative group is highly inefficient and dependent upon each individual’s networking and compute capabilities. Overall, traditional methods of scientific collaboration have compromised security and hampered time to discovery.

Using the AWS Cloud, collaborative researchers can share datasets easily and securely by taking advantage of Identity and Access Management (IAM) policy restrictions for user bucket access as well as S3 bucket policies or Access Control Lists (ACLs). To streamline analysis and ensure data security, many researchers are eliminating the necessity to download datasets entirely by leveraging resources that facilitate moving the analytics to the data source and/or taking advantage of remote API requests to access a shared database or data lake. One way our customers are accomplishing this is to leverage container-based Docker technology to provide collaborators with a way to submit algorithms or models for execution on the system hosting the shared datasets:

Docker container images have all of the application’s dependencies bundled together, and therefore provide a high degree of versatility and portability, which is a significant advantage over other executable-based approaches. In the case of collaborative machine learning projects, each Docker container will contain applications, language runtime, packages, and libraries, as well as any of the more popular deep learning frameworks commonly used by researchers, including MXNet, Caffe, TensorFlow, and Theano.

A common feature in these frameworks is the ability to leverage a host machine’s Graphical Processing Units (GPUs) for significant acceleration of the matrix and vector operations involved in the machine learning computations. As such, researchers with these objectives can leverage EC2’s new P2 instance types in order to power execution of submitted machine learning models. In addition, GPUs can be mounted directly to containers using the NVIDIA Docker tool and appear at the system level as additional devices. By leveraging Amazon EC2 Container Service and the EC2 Container Registry, collaborators are able to execute analytical solutions submitted to the project repository by their colleagues in a reproducible fashion, as well as continue to build on their existing environment. Researchers can also architect a continuous deployment pipeline to run their Docker-enabled workflows.
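As an illustration, here is a hypothetical boto3 sketch (the cluster name, task definition, and bucket URI are invented) of submitting a collaborator’s containerized model for execution next to the shared data:

import boto3

ecs = boto3.client('ecs')

# Run a submitted model as a one-off task on the cluster that hosts
# the shared datasets; the container image would come from the project's
# EC2 Container Registry repository.
response = ecs.run_task(
    cluster='consortium-analytics',        # hypothetical cluster
    taskDefinition='submitted-model:1',    # hypothetical task definition
    count=1,
    overrides={
        'containerOverrides': [{
            'name': 'model',
            'environment': [
                {'name': 'INPUT_S3_URI', 'value': 's3://shared-datasets/cohort-1/'},
            ],
        }],
    },
)
print(response['tasks'][0]['taskArn'])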

In conclusion, emerging cloud-enabled consortium initiatives serve as models for the broader research community for how cloud-enabled community science can expedite discoveries in Precision Medicine while also providing a platform where data security and discovery reproducibility is inherent to the project execution.

Mia D. Champion, Ph.D.

 

Amazon Inspector Update – Assessment Reporting, Proxy Support, and More

by Jeff Barr | in Amazon Inspector

Amazon Inspector is our automated security assessment service. It analyzes the behavior of the applications that you run in AWS and helps you to identify potential security issues. In late 2015 I introduced you to Inspector and showed you how to use it (Amazon Inspector – Automated Security Assessment Service). You start by using tags to define the collection of AWS resources that make up your application (also known as the assessment target). Then you create a security assessment template and specify the set of rules that you would like to run as part of the assessment:

After you create the assessment target and the security assessment template, you can run it against the target resources with a click. The assessment makes use of an agent that runs on your Linux and Windows-based EC2 instances (read about AWS Agents to learn more). You can process the assessments manually or you can forward the findings to your existing ticketing system using AWS Lambda (read Scale Your Security Vulnerability Testing with Amazon Inspector to see how to do this).

Whether you run one instance or thousands, we recommend that you run assessments on a regular and frequent basis. You can run them on your development and integration instances as part of your DevOps pipeline; this will give you confidence that the code and the systems that you deploy to production meet the conditions specified by the rule packages that you selected when you created the security assessment template. You should also run frequent assessments against production systems in order to guard against possible configuration drift.
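For example, a pipeline stage could kick off a run with a short boto3 call like this (the template ARN is a placeholder):

import boto3

inspector = boto3.client('inspector')

# Start an assessment run against the resources covered by an existing
# assessment template, e.g. as a step in a CI/CD pipeline.
run = inspector.start_assessment_run(
    assessmentTemplateArn='arn:aws:inspector:us-west-2:123456789012:target/0-AB6DMKnv/template/0-gJ8o0VcF',
    assessmentRunName='nightly-security-check',
)
print(run['assessmentRunArn'])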

We have recently added some powerful new features to Amazon Inspector:

  • Assessment Reports – The new assessment reports provide a comprehensive summary of the assessment, beginning with an executive summary. The reports are designed to be shared with teams and with leadership, while also serving as documentation for compliance audits.
  • Proxy Support – You can now configure the agent to run within proxy environments (many of our customers have been asking for this).
  • CloudWatch Metrics – Inspector now publishes metrics to Amazon CloudWatch so that you can track and observe changes over time.
  • Amazon Linux 2017.03 Support – This new version of the Amazon Linux AMI is launching today and Inspector supports it now.

Assessment Reports
After an assessment run completes, you can download a detailed assessment report in HTML or PDF form:

The report begins with a cover page and executive summary:

Then it summarizes the assessment rules and the targets that were tested:

Then it summarizes the findings for each rules package:

Because the report is intended to serve as documentation for compliance audits, it includes detailed information about each finding, along with recommendations for remediation:

The full report also indicates which rules were checked and passed for all target instances:
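The reports can also be fetched programmatically once a run completes; here is a minimal boto3 sketch (the run ARN is a placeholder):

import boto3

inspector = boto3.client('inspector')

# Request the full report for a completed assessment run; the response
# contains a pre-signed URL once the report is ready.
report = inspector.get_assessment_report(
    assessmentRunArn='arn:aws:inspector:us-west-2:123456789012:target/0-AB6DMKnv/template/0-gJ8o0VcF/run/0-jOoroxyY',
    reportFileFormat='PDF',   # or 'HTML'
    reportType='FULL',        # or 'FINDING'
)
if report['status'] == 'COMPLETED':
    print(report['url'])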

Proxy Support
The Inspector agent can now communicate with Inspector through an HTTPS proxy. For Linux instances, we support HTTPS proxy, and for Windows instances, we support WinHTTP proxy. See the Amazon Inspector User Guide for instructions on configuring proxy support for the AWS Agent.

CloudWatch Metrics
Amazon Inspector now publishes metrics to Amazon CloudWatch after each run. The metrics are categorized by target and by template. An aggregate metric, which indicates how many assessment runs have been performed in the AWS account, is also available. You can find the metrics in the CloudWatch console, as usual:

Here are the metrics that are published on a per-target basis:

And here are the per-template metrics:
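If you prefer to explore the metrics programmatically, a short boto3 sketch like this one lists everything Inspector has published to your account:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Enumerate the metrics in the AWS/Inspector namespace, along with the
# target and template dimensions they are categorized by.
paginator = cloudwatch.get_paginator('list_metrics')
for page in paginator.paginate(Namespace='AWS/Inspector'):
    for metric in page['Metrics']:
        dims = {d['Name']: d['Value'] for d in metric['Dimensions']}
        print(metric['MetricName'], dims)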

Amazon Linux 2017.03 Support
Many AWS customers use the Amazon Linux AMI and automatically upgrade as new versions become available. In order to provide these customers with continuous coverage from Amazon Inspector, we are now making sure that this and future versions of the AMI are supported by Amazon Inspector on launch day.

Available Now
All of these features are available now and you can start using them today!

Pricing is on a per-agent, per-assessment basis and starts at $0.30 per assessment, declining to as low as $0.05 per assessment when you run 45,000 or more assessments per month (see the Amazon Inspector Pricing page for more information).

Jeff;

Announcing the AWS Chatbot Challenge – Create Conversational, Intelligent Chatbots using Amazon Lex and AWS Lambda

by Tara Walker | in Amazon Lex, AWS Lambda, Developers

If you have been checking out the launches and announcements from the AWS 2017 San Francisco Summit, you may be aware that the Amazon Lex service is now generally available, and you can use the service today. Amazon Lex is a fully managed AI service that enables developers to build conversational interfaces into any application using voice and text. Lex uses the same deep learning technologies that power Amazon Alexa-enabled devices like the Amazon Echo. With the release of Amazon Lex, developers can build highly engaging, lifelike user experiences and natural language interactions within their own applications. Amazon Lex supports Slack, Facebook Messenger, and Twilio SMS, enabling you to easily publish your voice or text chatbots to these popular chat services. There is no better time to try out the Amazon Lex service to add the gift of gab to your applications, and now you have a great reason to get started.

May I have a drumroll, please?

I am thrilled to announce the AWS Chatbot Challenge! The AWS Chatbot Challenge is your opportunity to build a unique chatbot that helps solve a problem or adds value for prospective users. The AWS Chatbot Challenge is brought to you by Amazon Web Services in partnership with Slack.



The Challenge

Your mission, if you choose to accept it, is to build a conversational, natural language chatbot using Amazon Lex and leverage Lex’s integration with AWS Lambda to execute logic or data processing on the backend. Your submission can be a new or existing bot; however, if your bot is an existing one, it must have been updated to use Amazon Lex and AWS Lambda within the challenge submission period.
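To give you a feel for the Lambda side, here is a minimal Python fulfillment function that follows the Lex event and response format (the GetWeather intent, City slot, and weather logic are invented for illustration):

# A minimal AWS Lambda fulfillment handler for an Amazon Lex bot.
# Lex passes the matched intent and its slot values in the event; the
# handler returns a dialogAction telling Lex how to respond.
def lambda_handler(event, context):
    intent = event['currentIntent']['name']
    slots = event['currentIntent']['slots']

    if intent == 'GetWeather':  # hypothetical intent
        city = slots.get('City') or 'your city'
        message = 'It is sunny in {} today.'.format(city)  # placeholder logic
    else:
        message = "Sorry, I can't help with that yet."

    return {
        'dialogAction': {
            'type': 'Close',
            'fulfillmentState': 'Fulfilled',
            'message': {'contentType': 'PlainText', 'content': message},
        }
    }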

 

You are limited only by your own imagination when building your solution. To help get your creative juices flowing as you create or deploy your bot, here are some suggestions that can make your chatbot more distinctive:

  • Deploy your bot to Slack, Facebook Messenger, or Twilio SMS
  • Take advantage of other AWS services when building your bot solution.
  • Incorporate text-to-speech capabilities using a service like Amazon Polly
  • Utilize other third-party APIs, SDKs, and services
  • Leverage Amazon Lex pre-built enterprise connectors and add services like Salesforce, HubSpot, Marketo, Microsoft Dynamics, Zendesk, and QuickBooks as data sources.

There are cost-effective ways to build your bot using AWS Lambda. Lambda includes a free tier of one million requests and 400,000 GB-seconds of compute time per month. This free monthly usage is available to all customers and does not expire at the end of the 12-month Free Tier term. Furthermore, new Amazon Lex customers can process up to 10,000 text requests and 5,000 speech requests per month free during the first year. You can find details here.

Remember, the AWS Free Tier includes services with a free tier available for 12 months following your AWS sign-up date, as well as additional service offers that do not automatically expire at the end of your 12-month term. You can review the details about the AWS Free Tier and related services by going to the AWS Free Tier Details page.

 

Can We Talk – How It Works

The AWS Chatbot Challenge is open to individuals, and teams of individuals, who have reached the age of majority in their eligible area of residence at the time of competition entry. Organizations that employ 50 or fewer people are also eligible to compete as long as, at the time of entry, they are duly organized or incorporated and validly exist in an eligible area. Larger organizations (those employing more than 50 people) in eligible areas can participate, but will only be eligible for a non-cash recognition prize.

Chatbot Submissions are judged using the following criteria:

  • Customer Value: The problem or pain point the bot solves and the extent to which it adds value for users
  • Bot Quality: The unique way the bot solves users’ problems, and the originality, creativity, and differentiation of the bot solution
  • Bot Implementation: How well the bot was built and executed by the developer, including whether the bot functions as intended and recognizes and responds to the most common phrases asked of it

Prizes

The AWS Chatbot Challenge is awarding prizes for your hard work!

First Prize

  • $5,000 USD
  • $2,500 AWS Credits
  • Two (2) tickets to AWS re:Invent
  • 30 minute virtual meeting with the Amazon Lex team
  • Winning submission featured on the AWS AI blog
  • Cool swag

Second Prize

  • $3,000 USD
  • $1,500 AWS Credits
  • One (1) ticket to AWS re:Invent
  • 30 minute virtual meeting with the Amazon Lex team
  • Winning submission featured on the AWS AI blog
  • Cool swag

Third Prize

  • $2,000 USD
  • $1,000 AWS Credits
  • 30 minute virtual meeting with the Amazon Lex team
  • Winning submission featured on the AWS AI blog
  • Cool swag

 

Challenge Timeline

  • Submissions Start: April 19, 2017 at 12:00pm PDT
  • Submissions End: July 18, 2017 at 5:00pm PDT
  • Winners Announced: August 11, 2017 at 9:00am PDT

 

Up to the Challenge – Get Started

Are you ready to get started on your chatbot and dive into the challenge? Here’s how:

Review the details on the challenge rules and eligibility, then:

  1. Register for the AWS Chatbot Challenge
  2. Join the AWS Chatbot Slack Channel
  3. Create an account on AWS.
  4. Visit the Resources page for links to documentation and resources.
  5. Shoot your demo video that demonstrates your bot in action. Prepare a written summary of your bot and what it does.
  6. Provide a way to access your bot for judging and testing by including a link to your GitHub repo hosting the bot code and all deployment files, along with the instructions needed to test your bot.
  7. Submit your bot on AWSChatbot2017.Devpost.com before July 18, 2017 at 5 pm ET and share access to your bot, its GitHub repo, and its deployment files.

Summary

With Amazon Lex you can build conversation into web and mobile applications, as well as use it to build chatbots that control IoT devices, provide customer support, give transaction updates, or perform operations for DevOps workloads (ChatOps). Amazon Lex provides built-in integration with AWS Lambda, AWS Mobile Hub, and Amazon CloudWatch, and allows for easy integration with other AWS services, so you can use the AWS platform to build security, monitoring, user authentication, business logic, and storage into your chatbot or application. You can make additional enhancements to your voice or text chatbot by taking advantage of Amazon Lex’s support for chat services like Slack, Facebook Messenger, and Twilio SMS.

Dive into building chatbots and conversational interfaces with Amazon Lex and AWS Lambda with the AWS Chatbot Challenge for a chance to win some cool prizes. Some recent resources and online tech talks about creating bots with Amazon Lex and AWS Lambda that may help you in your bot building journey are:

If you have questions about the AWS Chatbot Challenge you can email aws-chatbot-challenge-2017@amazon.com or post a question to the Discussion Board.

 

Good Luck and Happy Coding.

Tara

AWS Online Tech Talks – April 2017

by Jeff Barr | in Events, Webinars

I’m a life-long learner! I try to set aside some time every day to read about or to try something new. I grew up in the 1960’s and 1970’s, back before the Internet existed. As a teenager, access to technical information of any sort was difficult, expensive, and time-consuming. It often involved begging my parents to drive me to the library or riding my bicycle down a narrow road to get to the nearest magazine rack. Today, with so much information at our fingertips, the most interesting challenge is deciding what to study.

As we do every month, we have assembled a set of online tech talks that can fulfill this need for you. Our team has created talks that will provide you with information about the latest AWS services along with the best practices for putting them to use in your environment.

The talks are free, but they do fill up, so you should definitely register ahead of time if you would like to attend. Most of the talks are one hour in length; all times are in the Pacific (PT) time zone. Here’s what we have on tap for the rest of this month:

Monday, April 24
10:30 AM – Modernize Meetings with Amazon Chime.

1:00 PM – Essential Capabilities of an IoT Cloud Platform.

Tuesday, April 25
8:30 AM – Hands On Lab: Windows Workloads on AWS.

Noon – AWS Services Overview & Quarterly Update.

1:30 PM – Scaling to Millions of Users in the Blink of an Eye with Amazon CloudFront: Are You Ready?

Wednesday, April 26
9:00 AM – IDC and AWS Joint Webinar: Getting the Most Bang for your Buck with #EC2 #Winning.

10:30 AM – From Monolith to Microservices – Containerized Microservices on AWS.

Noon – Accelerating Mobile Application Development and Delivery with AWS Mobile Hub.

Thursday, April 27
8:30 AM – Hands On Lab: Introduction to Microsoft SQL Server in AWS.

10:30 AM – Applying AWS Organizations to Complex Account Structures.

Noon – Deep Dive on Amazon Cloud Directory.

I will be presenting Tuesday’s service overview – hope to “see” you there.

Jeff;

AWS Price List API Update – Regional Price Lists

by Jeff Barr | in Announcements

We launched the AWS Price List API in late 2015 (read New – AWS Price List API to learn more). Designed to power budgeting, forecasting, and cost analysis tools, this API provides prices for AWS services in JSON and CSV form, with one price list per service. You can download and process the price lists on an as-needed basis, and you can also elect to receive notification via Amazon Simple Notification Service (SNS) each time we make a price update.

Today we are making the AWS Price List API more useful by providing you with access to a separate price list for each AWS region. These lists are smaller and will download more quickly than the original, non-regionalized lists, which are still available.

To access the price lists, download the offer index:

https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/index.json

It will contain one entry like this for each AWS service:

{
  "formatVersion" : "v1.0",
  "disclaimer" : "This pricing list is for informational purposes only. All prices are subject to the additional terms included in the pricing pages on http://aws.amazon.com. All Free Tier prices are also subject to the terms included at https://aws.amazon.com/free/",
  "publicationDate" : "2017-03-31T21:29:23Z",
  "offers" : {
    "{service code}" : {
      "offerCode" : "{service code}",
      "versionIndexUrl" : "/offers/v1.0/aws/{service code}/index.json",
      "currentVersionUrl" : "/offers/v1.0/aws/{service code}/current/index.json",
      "currentRegionIndexUrl" : "/offers/v1.0/aws/{service code}/current/region_index.json"
    }
  }
}

The currentRegionIndexUrl is relative to https://pricing.us-east-1.amazonaws.com. It points to a region index for the service. The region index contains one entry for each region where the service is available, indexed by region code (“ap-south-1”, “eu-west-2”, and so forth):

{
  "formatVersion" : "v1.0",
  "disclaimer" : "This pricing list is for informational purposes only. All prices are subject to the additional terms included in the pricing pages on http://aws.amazon.com. All Free Tier prices are also subject to the terms included at https://aws.amazon.com/free/",
  "publicationDate" : "2017-04-07T22:41:47Z",
  "regions" : {
    "ap-south-1" : {
      "regionCode" : "ap-south-1",
      "currentVersionUrl" : "/offers/v1.0/aws/AmazonEC2/20170407224147/ap-south-1/index.json"
    },
    "eu-west-2" : {
      "regionCode" : "eu-west-2",
      "currentVersionUrl" : "/offers/v1.0/aws/AmazonEC2/20170407224147/eu-west-2/index.json"
    }
  }
}

The currentVersionUrl is also relative to https://pricing.us-east-1.amazonaws.com. Each URL points to a price list that is specific to one region and one service. The price list includes all of the SKUs that are available in the region, including data transfer in and out of the region. It also includes global SKUs that are related to the SKUs that are specific to the region.
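Putting the pieces together, here is a minimal Python sketch that walks from the offer index to a single region’s price list (using AmazonEC2 and eu-west-2 as examples):

import json
import urllib.request

BASE = 'https://pricing.us-east-1.amazonaws.com'

def fetch(path):
    # All URLs in the index files are relative to the pricing endpoint.
    with urllib.request.urlopen(BASE + path) as response:
        return json.loads(response.read().decode('utf-8'))

# 1. Download the offer index and find the service's region index.
offers = fetch('/offers/v1.0/aws/index.json')['offers']
region_index_url = offers['AmazonEC2']['currentRegionIndexUrl']

# 2. Look up the price list URL for a single region.
regions = fetch(region_index_url)['regions']
price_list_url = regions['eu-west-2']['currentVersionUrl']

# 3. Download just that region's (much smaller) price list.
price_list = fetch(price_list_url)
print(len(price_list['products']), 'SKUs in eu-west-2')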

Available Now
The new regional price lists are available now and you can start using them today at no charge. Take a look and let me know what you build!

Jeff;

 

Announcing 5 Partners That Have Achieved AWS Service Partner Status for AWS Service Catalog

by Ana Visneski | in AWS Marketplace

Allison Johnson from the AWS Marketplace wrote the guest post below to share some important APN news.

-Ana


At last November’s re:Invent, we announced the AWS Service Delivery Program to help AWS customers find partners in the AWS Partner Network (APN) with expertise in specific AWS services and skills.  Today we are adding Management Tools as a new Partner Solution to the Service Delivery program, and AWS Service Catalog is the first product to launch in that category.  APN Service Catalog partners help create catalogs of IT services that are approved by the customer’s organization for use on AWS.  With AWS Service Catalog, customers and APN partners can centrally manage commonly deployed IT services to help achieve consistent governance and meet compliance requirements while enabling users to self-provision approved services.

We have an enormous APN partner ecosystem, with thousands of consulting partners, and this program simplifies the process of finding partners experienced in implementing Service Catalog.  The consulting partners we are launching this program with today are BizCloud Experts, Cloudticity, Flux7, Logicworks, and Wipro.

The process our partners go through to achieve the Service Catalog designation is not easy. All of the partners must have publicly referenceable customers using Service Catalog, and there is a technical review to ensure that they understand Service Catalog almost as well as our own Solutions Architects do!

One common theme across all of the partners’ technical applications was the requirement to help their customers meet HIPAA compliance objectives. HIPAA requires three essential components: 1) establishment of processes, 2) enforcement of processes, and 3) separation of roles. Service Catalog helps with all three. Using AWS CloudFormation templates, Service Catalog launch constraints, and Identity and Access Management (IAM) based permissions to Service Catalog portfolios, partners are able to establish and strictly enforce a process for infrastructure delivery for their customers. Separation of roles is achieved by using Service Catalog’s launch constraints and IAM, and our partners were able to create separation of roles such that developers remained agile but could do only what their policies permitted.

Another common theme was the use of Service Catalog for self-service discovery and launch of AWS services.  Customers can navigate to the Service Catalog to view the dashboard of products available to them, and they are empowered to launch their own resources.
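As a sketch of what that self-service launch looks like programmatically (the product and artifact IDs are placeholders), a user with access to a portfolio could provision an approved product like this:

import boto3

servicecatalog = boto3.client('servicecatalog')

# Launch an approved product from the catalog; the launch constraint's
# IAM role, not the end user's own permissions, performs the provisioning.
servicecatalog.provision_product(
    ProductId='prod-abcd1234efgh',              # placeholder ID
    ProvisioningArtifactId='pa-abcd1234efgh',   # placeholder version ID
    ProvisionedProductName='dev-web-stack',
    ProvisioningParameters=[
        {'Key': 'InstanceType', 'Value': 't2.micro'},
    ],
)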

Customers also use Service Catalog for standardization and central management.  The customer’s service catalog manager can manage resource updates and changes via the Service Catalog dashboard.  Tags can be standardized across instances launched from the Service Catalog.

Below are visual diagrams of Service Catalog workflows created for two of BizCloud Experts’ customers.

Ready to learn more?  If you are an AWS customer looking for a consulting partner with Service Catalog expertise, click here for more information about our launch partners.  If you are a consulting partner and would like to apply for membership, contact us.

-Allison Johnson