AWS Blog

AWS Knowledge Center Video: Preparing to Send a Snowball Back to AWS

by Jeff Barr | in Amazon S3, AWS Snowball

Do you know about the AWS Support Knowledge Center? It contains answers to some of the most frequently asked questions and other requests asked of our support team. Many of the answers even include a short video that serves to illustrate the process or to provide additional info on the topic.

For example, I recently stepped into our studio and created a new video called Preparing to Send a Snowball Back to AWS. In 90 action-packed seconds, this video shows you how to power down the Snowball, stow the cables, lock the back panel, and verify that the proper return address is on the built-in display:

Visit the Knowledge Center to see other videos and to find answers to other questions that you might have about AWS.

Jeff;

 

News from the AWS Summit in Berlin – 3rd AZ & Lightsail in Frankfurt and Another Polly Voice

by Jeff Barr | in Amazon Lightsail, Amazon Polly, Announcements

We launched the AWS Region in Frankfurt in the fall of 2014 and opened the AWS Marketplace for the Region the next year.

Our customers in Germany come in all shapes and sizes: startups, mid-market, enterprise, and public sector. These customers have made great use of the new Region, building and running applications and businesses that serve Germany, Europe, and more. They rely on the broad collection of security features, certifications, and assurances provided by AWS to help protect and secure their customer data, in accord with internal and legal requirements and regulations. Our customers in Germany also take advantage of the sales, support, and architecture resources and expertise located in Berlin, Dresden, and Munich.

The AWS Summit in Berlin is taking place today and we made some important announcements from the stage. Here’s a summary:

  • Third Availability Zone in Frankfurt
  • Amazon Lightsail in Frankfurt
  • New voice for Amazon Polly

Third Availability Zone in Frankfurt
We will be opening an additional Availability Zone (AZ) in the EU (Frankfurt) Region in mid-2017 in response to the continued growth in the use of AWS. This brings us up to 43 Availability Zones within 16 geographic Regions around the world. We are also planning to open five Availability Zones in new AWS Regions in France and China later this year (see the AWS Global Infrastructure maps for more information).

AWS customers in Germany are already making plans to take advantage of the new AZ. For example:

Siemens expects to gain additional flexibility by mirroring their services across all of the AZs. It will also allow them to store all of their data in Germany.

Zalando will do the same, mirroring their services across all of the AZs and looking ahead to moving more applications to the cloud.

Amazon Lightsail in Frankfurt
Amazon Lightsail lets you launch a virtual machine preconfigured with SSD storage, DNS management, and a static IP address in a matter of minutes (read Amazon Lightsail – The Power of AWS, the Simplicity of a VPS to learn more).

Amazon Lightsail is now available in the EU (Frankfurt) Region and you can start using it today. You can use it to host applications that are required to store customer data or other sensitive information in Germany.
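If you prefer the command line, here’s a minimal sketch of launching a Lightsail instance in the new Region with the AWS CLI. The instance name, blueprint, and bundle shown here are placeholder values, so substitute ones that match your own use case:

$ aws lightsail create-instances --region eu-central-1 \
  --instance-names my-frankfurt-vps \
  --availability-zone eu-central-1a \
  --blueprint-id amazon_linux \
  --bundle-id nano_1_0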

New Voice for Amazon Polly
Polly gives you high-quality, natural-sounding male and female speech in multiple languages. Today we are adding another German-speaking female voice to Polly, bringing the total number of voices to 48:

Like the German voice of Alexa, Vicki (the new voice) sounds fluent and natural. She is able to intelligently pronounce the Anglicisms frequently used in German texts, including their fully inflected forms. To get started with Polly, open up the Polly Console or read the Polly Documentation.
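You can also take Vicki for a spin from the command line. Here’s a quick sketch using the AWS CLI; the sample sentence and output file name are just examples:

$ aws polly synthesize-speech --voice-id Vicki --output-format mp3 \
  --text "Ich habe das Update schon downgeloadet und installiert." \
  vicki-sample.mp3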

I’m looking forward to hearing more about the continued growth and success of our customers in and around Germany!

Jeff;

AWS Lambda Support for AWS X-Ray

by Randall Hunt | in AWS Lambda, AWS X-Ray, Java

Today we’re announcing general availability of AWS Lambda support for AWS X-Ray. As you may already know from Jeff’s GA post, X-Ray is an AWS service for analyzing the execution and performance behavior of distributed applications. Traditional debugging methods don’t work so well for microservice-based applications, in which there are multiple, independent components running on different services. X-Ray allows you to rapidly diagnose errors, slowdowns, and timeouts by breaking down the latency in your applications. In just a moment, I’ll demonstrate how you can use X-Ray in your own applications by walking through building and analyzing a simple Lambda-based application.

If you just want to get started right away you can easily turn on X-Ray for your existing Lambda functions by navigating to your function’s configuration page and enabling tracing:

Or in the AWS Command Line Interface (CLI) by updating the function’s tracing-config (be sure to pass in a --function-name as well):

$ aws lambda update-function-configuration --tracing-config '{"Mode": "Active"}'

When tracing mode is active, Lambda will attempt to trace your function (unless explicitly told not to trace by an upstream service). Otherwise, your function will only be traced if it is explicitly told to do so by an upstream service. Once tracing is enabled, you’ll start generating traces and you’ll get a visual representation of the resources in your application and the connections (edges) between them. One thing to note is that the X-Ray daemon does consume some of your Lambda function’s resources. If you’re getting close to your memory limit, Lambda will try to kill the X-Ray daemon to avoid throwing an out-of-memory error.
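To confirm that tracing is turned on, you can read the configuration back with the CLI (the function name here is a placeholder):

$ aws lambda get-function-configuration --function-name my-function \
  --query TracingConfig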

Let’s test this new integration out by building a quick application that uses a few different services.


As a twenty-something with a smartphone, I have a lot of pictures and selfies (10,000+!), and I thought it would be great to analyze all of them. We’ll write a simple Lambda function with the Java 8 runtime that responds to new images uploaded into an Amazon Simple Storage Service (S3) bucket. We’ll use Amazon Rekognition on the photos and store the detected labels in Amazon DynamoDB.

[Service map]

First, let’s define a few quick X-Ray vocabulary words: subsegments, segments, and traces. Got that? X-Ray is easy to understand if you remember that subsegments and segments make up traces, which X-Ray processes to generate service graphs. Service graphs provide the nice visual representation you can see above (with different colors indicating various request responses). The compute resources that run your applications send data about the work they’re doing in the form of segments. You can add additional annotations about that data and more granular timing of your code by creating subsegments. The path of a request through your application is tracked with a trace. A trace collects all the segments generated by a single request. That means you can easily trace Lambda events coming in from S3 all the way to DynamoDB and understand where errors and latencies are cropping up.

So, we’ll create an S3 bucket called selfies-bucket, a DynamoDB table called selfies-table, and a Lambda function. We’ll add a trigger to our Lambda function for the S3 bucket on ObjectCreated:All events. Our Lambda function code will be super simple and you can look at it in its entirety here. With no code changes, we can enable X-Ray in our Java function by including the aws-xray-recorder-sdk-core and aws-xray-recorder-sdk-aws-sdk-instrumentor packages in our JAR.
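If you’d like to create the bucket and table from the command line, a rough sketch follows; the table’s key schema is an assumption on my part, so adjust it to match whatever your function code expects:

$ aws s3 mb s3://selfies-bucket
$ aws dynamodb create-table --table-name selfies-table \
  --attribute-definitions AttributeName=ImageId,AttributeType=S \
  --key-schema AttributeName=ImageId,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5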

Let’s trigger some photo uploads and get a look at the traces in X-Ray.
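One way to do both from the command line (assuming a local photos/ directory and GNU date) is to copy a batch of images into the bucket and then pull the most recent trace summaries:

$ aws s3 cp photos/ s3://selfies-bucket/ --recursive
$ aws xray get-trace-summaries \
  --start-time $(date -d '10 minutes ago' +%s) \
  --end-time $(date +%s)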

We’ve got some data! We can click on one of these individual traces for a lot of detailed information on our invocation.

In the first AWS::Lambda segment, we see the dwell time of the function (how long it spent waiting to execute), followed by the number of execution attempts.

In the second AWS::Lambda::Function segment there are a few possible subsegments:

  • The initialization subsegment includes all of the time spent before your function handler starts executing
  • The outbound service calls
  • Any of your custom subsegments (these are really easy to add)

Hmm, it seems like there’s a bit of an issue on the DynamoDB side. We can even dive deeper and get the full exception stack trace by clicking on the error icon. You can see we’ve been throttled by DynamoDB because we’re out of write capacity units. Luckily, we can add more with just a few clicks or a quick API call. As we do that, we’ll see more and more green on our service map!
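That quick API call can be as simple as raising the table’s provisioned write capacity with the CLI (the capacity values below are arbitrary examples):

$ aws dynamodb update-table --table-name selfies-table \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=25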

The X-Ray SDKs make it super easy to emit data to X-Ray, but you don’t have to use them to talk to the X-Ray daemon. For Python, you can check out this library from Rackspace called fleece. The X-Ray service is full of interesting stuff and the best place to learn more is by hopping over to the documentation. I’ve been using it for my @awscloudninja bot and it’s working great! Just keep in mind that this isn’t an official library and isn’t supported by AWS.

Personally, I’m really excited to use X-Ray in all of my upcoming projects because it really will save me some time and effort debugging and operating. I look forward to seeing what our customers can build with it as well. If you come up with any cool tricks or hacks please let me know!

– Randall

EC2 In-Memory Processing Update: Instances with 4 to 16 TB of Memory + Scale-Out SAP HANA to 34 TB

by Jeff Barr | in Amazon EC2, SAP HANA

Several times each month, I speak to AWS customers at our Executive Briefing Center in Seattle. I describe our innovation process and talk about how the roadmap for each AWS offering is driven by customer requests and feedback.

A good example of this is our work to make AWS a great home for SAP’s portfolio of business solutions. Over the years our customers have told us that they run large-scale SAP applications in production on AWS and we’ve worked hard to provide them with EC2 instances that are designed to accommodate their workloads. Because SAP installations are unfailingly mission-critical, SAP certifies their products for use on certain EC2 instance types and sizes. We work directly with SAP in order to achieve certification and to make AWS a robust & reliable host for their products.

Here’s a quick recap of some of our most important announcements in this area:

June 2012 – We expanded the range of SAP-certified solutions that are available on AWS.

October 2012 – We announced that the SAP HANA in-memory database is now available for production use on AWS.

March 2014 – We announced that SAP HANA can now run in production form on cr1.8xlarge instances with up to 244 GB of memory, with the ability to create test clusters that are even larger.

June 2014 – We published a SAP HANA Deployment Guide and a set of AWS CloudFormation templates in conjunction with SAP certification on r3.8xlarge instances.

October 2015 – We announced the x1.32xlarge instances with 2 TB of memory, designed to run SAP HANA, Microsoft SQL Server, Apache Spark, and Presto.

August 2016 – We announced that clusters of X1 instances can now be used to create production SAP HANA clusters with up to 7 nodes, or 14 TB of memory.

October 2016 – We announced the x1.16xlarge instance with 1 TB of memory.

January 2017 – SAP HANA was certified for use on r4.16xlarge instances.

Today, customers from a broad collection of industries run their SAP applications in production form on AWS (the SAP and Amazon Web Services page has a long list of customer success stories).

My colleague Bas Kamphuis recently wrote about Navigating the Digital Journey with SAP and the Cloud (registration required). He discusses the role of SAP in digital transformation and examines the key characteristics of the cloud infrastructure that support it, while pointing out many of the advantages that the cloud offers in comparison to other hosting options. Here’s how he illustrates these advantages in his article:

We continue to work to make AWS an even better place to run SAP applications in production form. Here are some of the things that we are working on:

  • Bigger SAP HANA Clusters – You can now build scale-out SAP HANA clusters with up to 17 nodes (34 TB of memory).
  • 4 TB Instances – The upcoming x1e.32xlarge instances will offer 4 TB of memory.
  • 8 – 16 TB Instances – Instances with up to 16 TB of memory are in the works.

Let’s dive in!

Building Bigger SAP HANA Clusters
I’m happy to announce that we have been working with SAP to certify the x1.32xlarge instances for use in scale-out clusters with up to 17 nodes (34 TB of memory). This is the largest scale-out deployment available from any cloud provider today, and allows our customers to deploy very large SAP workloads on AWS (visit the SAP HANA Hardware directory certification for the x1.32xlarge instance to learn more). To learn how to architect and deploy your own scale-out cluster, consult the SAP HANA on AWS Quick Start.

Extending the Memory-Intensive X1 Family
We will continue to invest in this and other instance families in order to address your needs and to give you a solid growth path.

Later this year we plan to make the x1e.32xlarge instances available in several AWS Regions, in both On-Demand and Reserved Instance form. These instances will offer 4 TB of DDR4 memory (twice as much as the x1.32xlarge), 128 vCPUs (four 2.3 GHz Intel® Xeon® E7 8880 v3 processors), high memory bandwidth, and large L3 caches. The instances will be VPC-only, and will deliver up to 20 Gbps of network bandwidth using the Elastic Network Adapter while minimizing latency and jitter. They’ll be EBS-optimized by default, with up to 14 Gbps of dedicated EBS throughput.

Here are some screen shots from the shell. First, dmesg shows the boot-time kernel message:

Second, lscpu shows the vCPU & socket count, along with many other interesting facts:

And top shows nearly 900 processes:

Here’s the view from within HANA Studio:

This new instance, along with the certification for larger clusters, broadens the set of scale-out and scale-up options that you have for running SAP on EC2, as you can see from this diagram:

The Long-Term Memory-Intensive Roadmap
Because we know that planning large-scale SAP installations can take a considerable amount of time, I would also like to share part of our roadmap with you.

Today, customers are able to run larger SAP HANA certified servers in third-party colocation data centers and connect them to their AWS infrastructure via AWS Direct Connect, but customers have told us that they really want a cloud-native solution like they currently get with X1 instances.

In order to meet this need, we are working on instances with even more memory! Throughout 2017 and 2018, we plan to launch EC2 instances with between 8 TB and 16 TB of memory. These upcoming instances, along with the x1e.32xlarge, will allow you to create larger single-node SAP installations and multi-node SAP HANA clusters, and to run other memory-intensive applications and services. They will also provide you with some scale-up headroom that will become helpful when you start to reach the limits of the smaller instances.

I’ll share more information on our plans as soon as possible.

Say Hello at SAPPHIRE
The AWS team will be in booth 539 at SAPPHIRE with a rolling set of sessions from our team, our customers, and our partners in the in-booth theater. We’ll also be participating in many sessions throughout the event. Here’s a sampling (see SAP SAPPHIRE NOW 2017 for a full list):

SAP Solutions on AWS for Big Businesses and Big Workloads – Wednesday, May 17th at Noon – Bas Kamphuis (General Manager, SAP, AWS) & Ed Alford (VP of Business Application Services, BP).

Break Through the Speed Barrier When You Move to SAP HANA on AWS – Wednesday, May 17th at 12:30 PM – Paul Young (VP, SAP) and Saul Dave (Senior Director, Enterprise Systems, Zappos).

AWS Fireside Chat with Zappos (Rapid SAP HANA Migration: Real Results) – Thursday, May 18th at 11:00 AM – Saul Dave (Senior Director, Enterprise Systems, Zappos) and Steve Jones (Senior Manager, SAP Solutions Architecture, AWS).

Jeff;

PS – If you have some SAP experience and would like to bring it to the cloud, take a look at the Principal Product Manager (AWS Quick Starts) and SAP Architect positions.

AWS is Streaming Live on Twitch

by Tara Walker | in AWS Twitch Channel, Developers

Twitch is one of the leading community streaming video platforms today for developers, gamers, and artists. Each day, millions visit Twitch to watch and discuss their passions by joining live sessions with other passionate online streamers. Amazon Web Services joined the fun this past November by adding the AWS Twitch Channel to bring the latest AWS technologies to the Twitch audience. The AWS Twitch Channel hosts weekly live interactive coding and maker sessions targeted toward all levels of cloud enthusiasts. For more information on upcoming episodes, past broadcasts, or to meet the team, visit https://aws.amazon.com/twitch/.

The AWS Twitch channel will have multiple shows throughout the year, each with various themes, broadcasters, and topics. Currently, there are two shows available for you to tune into: Live Coding with AWS and the AWS Maker Studio show.

The Live Coding with AWS show features fellow technical evangelists Randall Hunt, Julio Faerman, and Abby Fuller building apps and solutions covering practically every AWS service from the perspective of the developer. What’s great about being part of the Twitch audience for the show is that you drive the direction of the broadcast. Additionally, guests from Amazon, AWS, and the community will join our Twitch hosts to talk about cool new projects and implementations built on the AWS platform.

The AWS Maker Studio show premieres on May 17th and will cover projects and solutions especially for the Maker in all of us. The hosts, Todd Varland, Trevor Hykes, and Anupam Mishra, will be building a cloud-connected robot over the course of the first season. Watch the first episode to see the first steps, and consider following along and building your own robot.

This May, there are several exciting Twitch sessions that we invite you to join, build, code, and make with us. This month’s schedule is as follows:

Live Coding with AWS

  • Wednesday, May 10, 2:00 PM PT – Building Chatbots with Lex (Presenter: Randall Hunt)
  • Thursday, May 11, 8:00 AM PT – Machine Learning (Presenter: Julio Faerman)
  • Friday, May 19, 8:00 AM PT – Cloud Concepts Review (Presenter: Julio Faerman)

AWS Maker Studio

  • Wednesday, May 17, 4:30 PM PT – Build your First Cloud Connected Robot
  • Wednesday, May 24, 4:30 PM PT – Sensing the Environment for your First Robot
  • Wednesday, May 31, 4:30 PM PT – Connecting Your Robot to the Cloud

If you are interested in the latest AWS technologies or in connecting with other developers in the community, tune in each week at https://twitch.tv/aws for interactive live coding with AWS experts. And don’t worry if you happen to miss a session; several episodes are available on demand.

We would love for you all to join the Twitch community by tuning into Twitch and the AWS Twitch Channel to stream, view, and interact with other developers, gamers, and makers while building in the cloud with us!

Hope to see you on the stream!

Tara

AWS Online Tech Talks – May 2017

by Tara Walker | in Events, Webinars

Spring has officially sprung. As you enjoy the blossoming of May flowers, it may be worth noting some of the great tech talks blossoming online during the month of May. This month’s AWS Online Tech Talks features sessions on topics like AI, DevOps, Data, and Serverless, to name just a few.

May 2017 – Schedule

Below is the upcoming schedule for the live, online technical sessions scheduled for the month of May. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts. All schedule times for the online tech talks are shown in the Pacific Time (PDT) time zone.

Webinars featured this month are:

Monday, May 15

  • 9:00 AM – 10:00 AM: Artificial Intelligence – Integrate Your Amazon Lex Chatbot with Any Messaging Service

Tuesday, May 16

  • 10:30 AM – 11:30 AM: Compute – Deep Dive on Amazon EC2 F1 Instance
  • 12:00 Noon – 1:00 PM: IoT – How to Connect Your Own Creations with AWS IoT

Wednesday, May 17

  • 9:00 AM – 10:00 AM: Management Tools – OpsWorks for Chef Automate – Automation Made Easy!
  • 10:30 AM – 11:30 AM: Serverless – Serverless Orchestration with AWS Step Functions
  • 12:00 Noon – 1:00 PM: Enterprise & Hybrid – Moving to the AWS Cloud: An Overview of the AWS Cloud Adoption Framework

Thursday, May 18

  • 9:00 AM – 10:00 AM: Compute – Scaling Up Tenfold with Amazon EC2 Spot Instances
  • 10:30 AM – 11:30 AM: Big Data – Building Analytics Pipelines for Games on AWS
  • 12:00 Noon – 1:00 PM: Big Data – Serverless Big Data Analytics using Amazon Athena and Amazon QuickSight

Monday, May 22

  • 9:00 AM – 10:00 AM: Artificial Intelligence – What’s New with Amazon Rekognition
  • 10:30 AM – 11:30 AM: Serverless – Building Serverless Web Applications

Tuesday, May 23

  • 8:30 AM – 10:00 AM: Hands-On Lab – Windows Workloads on AWS
  • 10:30 AM – 11:30 AM: Big Data – Streaming ETL for Data Lakes using Amazon Kinesis Firehose
  • 12:00 Noon – 1:00 PM: DevOps – Deep Dive: Continuous Delivery for AI Applications with ECS

Wednesday, May 24

  • 9:00 AM – 10:00 AM: Storage – Moving Data into the Cloud with AWS Transfer Services
  • 12:00 Noon – 1:00 PM: Containers – Building a CICD Pipeline for Container Deployment to Amazon ECS

Thursday, May 25

  • 9:00 AM – 10:00 AM: Mobile – Test Your Android App with Espresso and AWS Device Farm
  • 10:30 AM – 11:30 AM: Security & Identity – Advanced Techniques for Federation of the AWS Management Console and Command Line Interface (CLI)

Tuesday, May 30

  • 9:00 AM – 10:00 AM: Databases – DynamoDB: Architectural Patterns and Best Practices for Infinitely Scalable Applications
  • 10:30 AM – 11:30 AM: Compute – Deep Dive on Amazon EC2 Elastic GPUs
  • 12:00 Noon – 1:00 PM: Security & Identity – Securing Your AWS Infrastructure with Edge Services

Wednesday, May 31

  • 8:30 AM – 10:00 AM: Hands-On Lab – Introduction to Microsoft SQL Server in AWS
  • 10:30 AM – 11:30 AM: Enterprise & Hybrid – Best Practices in Planning a Large-Scale Migration to AWS
  • 12:00 Noon – 1:00 PM: Databases – Convert and Migrate Your NoSQL Database or Data Warehouse to AWS

The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These sessions feature live demonstrations & customer examples led by AWS engineers and Solution Architects. Check out the AWS YouTube channel for more on-demand webinars on AWS technologies.

Tara

New – USASpending.gov on an Amazon RDS Snapshot

by Jeff Barr | in Amazon RDS, Guest Post, Public Data Sets

My colleague Jed Sundwall runs the AWS Public Datasets program. He wrote the guest post below to tell you about an important new dataset that is available as an Amazon RDS Snapshot. In the post, Jed introduces the dataset and shows you how to create an Amazon RDS DB Instance from the snapshot.

Jeff;


I am very excited to announce that, starting today, the entire public USAspending.gov database is available for anyone to copy via Amazon Relational Database Service (RDS). USAspending.gov includes data on all spending by the federal government, including contracts, grants, loans, employee salaries, and more. The data is available via a PostgreSQL snapshot, which provides bulk access to the entire USAspending.gov database, and is updated nightly. At this time, the database includes all USAspending.gov data for the second quarter of fiscal year 2017; data going back to the year 2000 will be added over the summer. You can learn more about the database and how to access it on its AWS Public Dataset landing page.

Through the AWS Public Datasets program, we work with AWS customers to experiment with ways that the cloud can make data more accessible to more people. Most of our AWS Public Datasets are made available through Amazon S3 because of its tremendous flexibility and ability to scale to serve any volume of any kind of data files. What’s exciting about the USAspending.gov database is that it provides a great example of how Amazon RDS can be used to share an entire relational database quickly and easily. Typically, sharing a relational database requires extract, transform, and load (ETL) processes that involve redundant storage capacity, time for data transfer, and often scripts to migrate your database schema from one database engine to another. ETL processes can be so intimidating and cumbersome that they’re effectively impossible for many people to carry out.

By making their data available as a public Amazon RDS snapshot, the team at USAspending.gov has made it easy for anyone to get a copy of their entire production database for their own use within minutes. This will be useful for researchers and businesses who want to work with real data about all US Government spending and quickly combine it with their own data or other data resources.

Deploying the USAspending.gov Database Using the AWS Management Console
Let’s go through the steps involved in deploying the database in your AWS account using the AWS Management Console.

  1. Sign in to the AWS Management Console and select the US East (N. Virginia) region in the menu bar.
  2. Open the Amazon RDS Console and choose Snapshots in the navigation pane.
  3. In the filter for the search bar, select All Public Snapshots and search for 515495268755:
  4. Select the snapshot named arn:aws:rds:us-east-1:515495268755:snapshot:usaspending-db.
  5. Select Snapshot Actions -> Restore Snapshot. Select an instance size, and enter the other details, then click on Restore DB Instance.
  6. You will see that a DB Instance is being created from the snapshot, within your AWS account.
  7. After a few minutes, the status of the instance will change to Available.
  8. You can see the endpoint for your database on the main page along with other useful info:

Deploying the USAspending.gov Database Using the AWS CLI
You can also install the AWS Command Line Interface (CLI) and use it to create a DB Instance from the snapshot. Here’s a sample command:

$ aws rds restore-db-instance-from-db-snapshot --db-instance-identifier my-test-db-cli \
  --db-snapshot-identifier arn:aws:rds:us-east-1:515495268755:snapshot:usaspending-db \
  --region us-east-1

This will give you an ARN (Amazon Resource Name) that you can use to reference the DB Instance. For example:

$ aws rds describe-db-instances \
  --db-instance-identifier arn:aws:rds:us-east-1:917192695859:db:my-test-db-cli

This command will display the Endpoint.Address that you use to connect to the database.
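For example, here’s one way to pull out just the endpoint address with a JMESPath query (substitute your own DB Instance identifier):

$ aws rds describe-db-instances --db-instance-identifier my-test-db-cli \
  --query 'DBInstances[0].Endpoint.Address' --output text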

Connecting to the DB Instance
After following the AWS Management Console or AWS CLI instructions above, you will have access to the full USAspending.gov database within this Amazon RDS DB instance, and you can connect to it using any PostgreSQL client using the following credentials:

  • Username: root
  • Password: password
  • Database: data_store_api

If you use psql, you can access the database using this command:

$ psql -h my-endpoint.rds.amazonaws.com -U root -d data_store_api

You should change the database password after you log in:

ALTER USER "root" WITH ENCRYPTED PASSWORD '{new password}';

If you can’t connect to your instance but think you should be able to, you may need to check your VPC Security Groups and make sure inbound and outbound traffic on the port (usually 5432) is allowed from your IP address.
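As a sketch, you could allow inbound PostgreSQL traffic from your own IP address with a command along these lines; the security group ID and IP address are placeholders:

$ aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5432 \
  --cidr 203.0.113.10/32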

Exploring the Data
The USAspending.gov data is very rich, so it will be hard to do it justice in this blog post, but hopefully these queries will give you an idea of what’s possible. To learn about the contents of the database, please review the USAspending.gov Data Dictionary.

The following query will return the total amount of money the government is obligated to pay for contracts awarded by NASA that include “Mars” or “Martian” in the description of the award:

select sum(total_obligation) from awards, subtier_agency 
  where (awards.description like '% MARTIAN %' OR awards.description like '% MARS %') 
  AND subtier_agency.name = 'National Aeronautics and Space Administration';

As I write this, the result I get for this query is $55,411,025.42. Note that the database is updated nightly and will include more historical data in the coming months, so you may get a different result if you run this query.

Now, here’s the same query, but looking for awards with “Jupiter” or “Jovian” in the description:

select sum(total_obligation) from awards, subtier_agency
  where (awards.description like '%JUPITER%' OR awards.description like '%JOVIAN%') 
  AND subtier_agency.name = 'National Aeronautics and Space Administration';

The result I get is $14,766,392.96.

Questions & Comments
I’m looking forward to seeing what people can do with this data. If you have any questions about the data, please create an issue on the USAspending.gov API’s issue tracker on GitHub.

— Jed

Event: AWS Community Day in San Francisco

by Tara Walker | in Conferences & User Groups, Events

Spring is in the air, new technologies are budding, and the community is buzzing.

Which means it’s also springtime in the city by the bay and AWS is thrilled to announce an exciting event, AWS Community Day.

AWS Community Day is a community-led event in San Francisco where AWS Community Heroes, user group leaders, and other AWS enthusiasts will come together to deliver a full day of technical sessions on the latest in cloud computing.

During the event, you will get an opportunity to learn about the latest cloud computing trends, optimization best practices, and practical insights into securing your infrastructure. Additionally, you will have the chance to discuss approaches to building healthy AWS meetups and community knowledge-sharing sessions.

To learn more about the AWS Community Day in San Francisco, take a look at the blog post written by Community Hero Eric Hammond: https://alestic.com/2017/05/aws-community-day-san-francisco/.

Don’t miss this great event; register today to take part in AWS Community Day.

Tara

Amazon Chime Update – Use Your Existing Active Directory, Claim Your Domain

by Jeff Barr | in Amazon Chime

I first told you about Amazon Chime this past February (Amazon Chime – Unified Communications Service) and described how I use it to connect and collaborate with people all over the world.

Since the launch, Amazon Chime has quickly become the communication tool of choice within the AWS team. I participate in multiple person-to-person and group chats throughout the day, and frequently “Chime In” to Amazon Chime-powered conferences to discuss upcoming launches and speaking opportunities.

Today we are adding two new features to Amazon Chime: the ability to claim a domain as your own and support for your existing Active Directory.

Claiming a Domain
Claiming a domain gives you the authority to manage Amazon Chime usage for all of the users in the domain. You can make sure that new employees sign up for Amazon Chime in an official fashion and you can suspend accounts for employees that leave the organization.

To claim a domain, you assert that you own a particular domain name and then back up the assertion by adding a TXT record to your domain’s DNS configuration. You must do this for each domain and subdomain that your organization uses for email addresses.

Here’s how I would claim one of my own domains:

After I click on Verify this domain, Amazon Chime provides me with the record for my DNS:

After I do this, the domain’s status will change to Pending Verification. Once Amazon Chime has confirmed that the new record exists as expected, the status will change to Verified and the team account will become an enterprise account.
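While you wait, you can check that the TXT record has propagated with a quick DNS lookup and make sure the value Amazon Chime gave you appears in the output (example.com is a placeholder for your own domain):

$ dig +short TXT example.com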

Active Directory Support
This feature allows your users to sign in to Amazon Chime using their existing Active Directory identity and credentials. After you have set it up, you can enable and take advantage of advanced AD security features such as password rotation, password complexity rules, and multi-factor authentication. You can also control the allocation of Amazon Chime’s Plus and Pro licenses on a group-by-group basis (check out Plans and Pricing to learn more about each type of license).

In order to use this feature, you must be using an Amazon Chime enterprise account. If you are using a team account, follow the directions at Create an Enterprise Account before proceeding.

Then you will need to set up a directory with the AWS Directory Service. You have two options at this point:

  1. Use the AWS Directory Service AD Connector to connect to your existing on-premises Active Directory instance (a CLI sketch follows this list).
  2. Use Microsoft Active Directory, configured for standalone use. Read How to Create a Microsoft AD Directory for more information on this option.
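For the first option, here is a rough sketch of creating an AD Connector with the CLI; every value shown (domain name, credentials, VPC, subnets, and DNS IPs) is a placeholder for your own environment:

$ aws ds connect-directory --name corp.example.com --short-name CORP \
  --password '<connector-account-password>' --size Small \
  --connect-settings VpcId=vpc-0123456789abcdef0,SubnetIds=subnet-aaaa1111,subnet-bbbb2222,CustomerDnsIps=10.0.0.10,10.0.0.11,CustomerUserName=admin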

After you have set up your directory, you can connect to it from within the Amazon Chime console by clicking on Settings, then Active directory, and choosing your directory from the drop-down:

After you have done this you can select individual groups within the directory and assign the appropriate subscriptions (Plus or Pro) on a group-by-group basis.

With everything set up as desired, your users can log in to Amazon Chime using their existing directory credentials.

These new features are available now and you can start using them today!

If you would like to learn more about Amazon Chime, you can watch the recent AWS Tech Talk: Modernize Meetings with Amazon Chime:

Here is the presentation from the talk:

Jeff;

 

EC2 Price Reductions – Reserved Instances & M4 Instances

by Jeff Barr | in Amazon EC2, Price Reduction

As AWS grows, we continue to find ways to make it an even better value. We work with our suppliers to drive down costs while also finding ways to build hardware and software that is increasingly more efficient and cost-effective.

In addition to reducing our prices on a regular and frequent basis, we also give customers options that help them to optimize their use of AWS. For example, Reserved Instances (first launched in 2009) allow Amazon EC2 users to obtain a significant discount when compared to On-Demand Pricing, along with a capacity reservation when used in a specific Availability Zone.

Our customers use multiple strategies to purchase and manage their Reserved Instances. Some prefer to make an upfront payment and earn a bigger discount; some prefer to pay nothing upfront and get a smaller (yet still substantial) discount. In the middle, others are happiest with a partial upfront payment and a discount that falls in between the two other options. In order to meet this wide range of preferences we are adding 3 Year No Upfront Standard Reserved Instances for most of the current generation instance types. We are also reducing prices for No Upfront Reserved Instances, Convertible Reserved Instances, and General Purpose M4 instances (both On-Demand and Reserved Instances). This is our 61st AWS Price Reduction.

Here are the details (all changes and reductions are effective immediately):

New No Upfront Payment Option for 3 Year Standard RIs – We previously offered a no upfront payment option with a 1 year term for Standard RIs. Today, we are adding a No Upfront payment option with a 3 year term for C4, M4, R4, I3, P2, X1, and T2 Standard Reserved Instances.
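To see the new option in your own account, you can list matching offerings with the CLI; the instance type and platform below are just examples (94608000 seconds is the three-year term):

$ aws ec2 describe-reserved-instances-offerings \
  --instance-type m4.xlarge --product-description "Linux/UNIX" \
  --offering-class standard --offering-type "No Upfront" \
  --min-duration 94608000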

Lower Prices for No Upfront Reserved Instances – We are lowering the prices for No Upfront 1 Year Standard and 3 Year Convertible Reserved Instances for the C4, M4, R4, I3, P2, X1, and T2 instance types by up to 17%, depending on instance type, operating system, and region.

Here are the average reductions for No Upfront Reserved Instances for Linux in several representative regions:

      US East (Northern Virginia)   US West (Oregon)   EU (Ireland)   Asia Pacific (Tokyo)   Asia Pacific (Singapore)
C4    -11%                          -11%               -10%           -10%                   -9%
M4    -16%                          -16%               -16%           -16%                   -17%
R4    -10%                          -10%               -10%           -10%                   -10%

Lower Prices for Convertible Reserved Instances – Convertible Reserved Instances allow you to change the instance family and other parameters associated with the RI at any time; this allows you to adjust your RI inventory as your application evolves and your needs change. We are lowering the prices for 3 Year Convertible Reserved Instances by up to 21% for most of the current generation instances (C4, M4, R4, I3, P2, X1, and T2).

Here are the average reductions for Convertible Reserved Instances for Linux in several representative regions:

      US East (Northern Virginia)   US West (Oregon)   EU (Ireland)   Asia Pacific (Tokyo)   Asia Pacific (Singapore)
C4    -13%                          -13%               -5%            -5%                    -11%
M4    -19%                          -19%               -17%           -15%                   -21%
R4    -15%                          -15%               -15%           -15%                   -15%

Similar reductions will go into effect for nearly all of the other regions as well.

Lower Prices for M4 Instances – We are lowering the prices for M4 Linux instances by up to 7%.

Visit the EC2 Reserved Instance Pricing Page and the EC2 Pricing Page, or consult the AWS Price List API for all of the new prices.

Learn More
The following blog posts contain additional information about some of the improvements that we have made to the EC2 Reserved Instance model:

You can also read AWS Pricing and the Reserved Instances FAQ to learn more.

Jeff;