AWS Startups Blog

Internet of Things — Part 2b

By Brett Francis, Enterprise Solutions Architect, AWS



Sensor data with Raspberry Pi and AWS, by two AWS Solutions Architects

“How can we show off plumbing?”

A few weeks before the AWS SF Summit, this was the question on everyone’s mind.

As described by Todd in Part 2a of this series, Low Performance Computing (LPC) platforms such as the Arduino and Raspberry Pi are increasingly popular for purpose-specific solutions. But a Raspberry Pi merely sending telemetry data to a streaming data solution such as Amazon Kinesis just doesn’t demo well. So in this post we focus on the straightforward “no compute necessary” single-page application and the composite solution created to let session participants see the flow of data from an audio sensor attached to the Raspberry Pi.

The No Compute approach is so named because no compute instance needs to run, even though an entire composite back-end is assembled to support the single-page application. The back-end services are assembled in the following order:

  1. Create an Amazon Kinesis stream.
  2. Set up an IAM user that gives the Raspberry Pi PUT access to the stream.
  3. Set up an IAM role that gives browser-based clients GET access to the stream.
  4. Host a static website on Amazon S3 and place the single-page application files in the corresponding bucket.
  5. Create a Login with Amazon application that pairs a user’s Amazon account with the IAM role that enables retrieving records from a Kinesis stream.
  6. Create a single-page application (SPA) using the AWS SDK for JavaScript in the Browser.

Creating the Kinesis Stream

To catch all the sensor data in a highly durable way that supports downstream fan-out, we create a Kinesis stream as explained in the documentation. When creating a Kinesis stream, the main initial question is “How big to start?” Since this demo anticipates only a few inbound sources of data and a few simultaneous JavaScript browser-based clients, we start with the smallest configuration possible: a single-shard stream named demo-stream, supporting 1 MB/sec PUT and 2 MB/sec GET. At the time of this posting, a single Kinesis shard costs $0.015 per hour plus $0.028 per 1M PUT transactions.
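As a back-of-the-envelope check on shard sizing, the per-shard limits quoted above can be turned into a simple estimate. This is a sketch; the sample throughput numbers are illustrative assumptions, not figures from the demo.

```python
import math

def shards_needed(ingest_mb_per_sec, egress_mb_per_sec):
    """Return the minimum number of Kinesis shards for the given throughput."""
    return max(
        math.ceil(ingest_mb_per_sec / 1.0),  # 1 MB/sec PUT per shard
        math.ceil(egress_mb_per_sec / 2.0),  # 2 MB/sec GET per shard
        1,                                   # a stream always has at least one shard
    )

# A handful of Pi sensors and browser readers fit comfortably in one shard:
print(shards_needed(0.1, 0.5))  # -> 1
```

Reads scale more generously than writes (2 MB/sec versus 1 MB/sec per shard), which suits this demo’s shape: a few devices writing, several browsers reading.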

Provisioning IAM

The next task is to set up both a secure way for the Raspberry Pi clients to send data to the stream and a secure way for each user to view the data flowing through the stream.

Using the IAM console, we create a summit-sensor user and attach the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStream",
        "kinesis:PutRecord"
      ],
      "Resource": [
        "arn:aws:kinesis:<region>:<accountID>:stream/<streamname>"
      ]
    }
  ]
}

To determine the stream’s ARN for the policy, run the following AWS CLI command using credentials that have describe-stream access:

$> aws kinesis describe-stream --stream-name <streamname> --region <region>
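The ARN sits inside the JSON that describe-stream returns. A small sketch of pulling it out, using an abbreviated, illustrative response (the region and account ID below are made up, not real values):

```python
import json

# Abbreviated, illustrative describe-stream output; a real response
# includes many more fields (shards, retention period, and so on).
describe_output = """
{
  "StreamDescription": {
    "StreamName": "demo-stream",
    "StreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/demo-stream",
    "StreamStatus": "ACTIVE"
  }
}
"""

arn = json.loads(describe_output)["StreamDescription"]["StreamARN"]
print(arn)  # -> arn:aws:kinesis:us-east-1:123456789012:stream/demo-stream
```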

The creator of the summit-sensor user downloads the user’s API keys and places them on the Raspberry Pi following the instructions for configuring the AWS SDK for Python (boto). Although we did this for the demo, storing API keys on the device itself is not recommended. Instead, use a mechanism that injects credentials onto the Raspberry Pi or retrieves them from a well-defined “off-box” location.
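The Pi-side publish step can be sketched as follows. The payload shape, partition-key scheme, and the helper name build_sensor_record are our assumptions for illustration; the original demo’s sensor code is not shown in this post.

```python
import json

def build_sensor_record(stream_name, device_id, audio_level):
    """Assemble keyword arguments for a Kinesis put_record call."""
    payload = json.dumps({"device": device_id, "level": audio_level})
    return {
        "StreamName": stream_name,
        "Data": payload,
        "PartitionKey": device_id,  # keying by device keeps one device on one shard
    }

record = build_sensor_record("demo-stream", "pi-01", 0.42)
print(record["PartitionKey"])  # -> pi-01

# With the AWS SDK for Python and credentials supplied from off-box,
# the actual send would be something like:
#   kinesis_client.put_record(**record)
```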

Also using the IAM console, we create a web-spa role and attach the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:Get*",
        "kinesis:DescribeStream"
      ],
      "Resource": [
        "arn:aws:kinesis:<region>:<accountID>:stream/<streamname>"
      ]
    }
  ]
}

Configuring S3 Static Website Hosting

All the code we need to access the Amazon Kinesis stream resides in a JavaScript-based single-page application running in a web browser. A simple way of serving up the static HTML and JavaScript is to create a static website on Amazon S3 as explained in the documentation. For the sake of continued discussion, let’s assume the Amazon S3 static website created is named spa-bucket.

A single file stream_reader_index.html (Gist) containing both the HTML and JavaScript is then placed into the spa-bucket bucket and configured as the index document. We also upload a demo privacy.html file to meet the requirement of an Amazon S3 static website.

Creating a Login with Amazon Application

Now we start to join things up for development and final deployment. This requires configuration of a Login with Amazon application, named spa-app, which we create as explained in the documentation.

By configuring the spa-app application, we start to form the connective tissue of the composite solution.

For the Privacy Notice URL, we point at the privacy.html page uploaded to the static website, and we add the static website’s URL to the application’s Allowed JavaScript Origins (optional, but required for browser-based calls to work).

Following the IAM documentation, we use the Client ID from the spa-app (amzn1.application-oa2-client.XXXXXXXXYYYYYYYYZZZZZZZZYYYYYYYY) to configure the trust relationship between the previously created web-spa role and the newly created Login with Amazon application.

We used the following policy document for the trust relationship:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "www.amazon.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "www.amazon.com:app_id":
            "amzn1.application-oa2-client.XXXXXXXXYYYYYYYYZZZZZZZZYYYYYYYY"
        }
      }
    }
  ]
}

Creating the Single-page Application

Now with the initial connective tissue set up for the single-page composite application, we can assemble the remaining composite functionality.

The important functions in the JavaScript are as follows:

document.getElementById('LoginWithAmazon').onclick = function() {
    var options = { scope : 'profile' };
    amazon.Login.authorize(options, amazonAuth);
};

This connects the Login button’s click action to the amazon.Login.authorize() call, which invokes amazonAuth() when authorization completes.

Next we logically connect the amazonAuth() response with the Amazon Kinesis getStream() function:

function amazonAuth(response) {
    ..snip..
    AWS.config.credentials = new AWS.WebIdentityCredentials({
        RoleArn: roleArn,
        ProviderId: 'www.amazon.com',
        WebIdentityToken: response.access_token
    });
    AWS.config.region = awsRegion;
    kinesis = new AWS.Kinesis();
    amazon.Login.retrieveProfile(response.access_token, getStream);
 }

Finally, we connect the getStream() and getShardIterator() behavior with the retrieval and charting of records using the getRecords() function:

function getStream() {
    kinesis.describeStream({StreamName: streamName}, 
        ..snip..
        kinesis.getShardIterator({
            StreamName:streamName,
            ShardId:data.StreamDescription.Shards[0].ShardId,
            ShardIteratorType:'LATEST'
        }, getRecords);
    });
 }
 function getRecords(err, data) {
     ..snip..
     kinesis.getRecords({
       ShardIterator:data.ShardIterator,Limit:100}, 
       function (err, data) {
         ..snip..
         for (var record in data.Records) {
           ..snip..

The entire JavaScript logic is here in the Gist of the file stream_reader_index.html.
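The same describe-stream, get-shard-iterator, get-records flow can be sketched in Python against a stubbed client, which makes the control flow visible without AWS credentials. FakeKinesis and its canned data are purely illustrative; the real demo reads from the live stream via the JavaScript SDK as shown above.

```python
import json

class FakeKinesis:
    """Illustrative stand-in for a Kinesis client, returning canned responses."""
    def describe_stream(self, StreamName):
        return {"StreamDescription": {"Shards": [{"ShardId": "shardId-000000000000"}]}}

    def get_shard_iterator(self, StreamName, ShardId, ShardIteratorType):
        return {"ShardIterator": "iterator-1"}

    def get_records(self, ShardIterator, Limit):
        return {
            "Records": [{"Data": json.dumps({"level": 0.7})}],
            "NextShardIterator": "iterator-2",
        }

def read_latest(client, stream_name, limit=100):
    """Fetch one batch of records from the first shard, starting at LATEST."""
    shard = client.describe_stream(StreamName=stream_name)["StreamDescription"]["Shards"][0]
    iterator = client.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard["ShardId"],
        ShardIteratorType="LATEST",
    )["ShardIterator"]
    batch = client.get_records(ShardIterator=iterator, Limit=limit)
    # A real poller would loop, re-calling get_records with NextShardIterator.
    return [json.loads(r["Data"]) for r in batch["Records"]]

print(read_latest(FakeKinesis(), "demo-stream"))  # -> [{'level': 0.7}]
```

Note that getRecords returns a NextShardIterator; the browser app (and any real poller) must pass that iterator to the next call rather than reusing the original one.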

Results and Conclusion

After a few days of tweaking, most of it HTML and charting polish rather than Kinesis-related JavaScript, the demo presenter launched a browser, navigated to the Amazon S3 static website URL, and signed in with the proper Login with Amazon credentials. The empty chart appeared, and the crowd, along with the creators of the single-page application, wondered whether this would work.

“Okay, everybody, let’s make some noise!”

After the hooting and hollering, all that powerful plumbing, assembled as the connective tissue behind a visually simple single-page application, flexed and displayed the audio data on the screen. The display was accomplished with only a single scaling point to consider, the Amazon Kinesis stream, and without a single server-side compute instance ever used.

The simple blinking lights of the Raspberry Pi paired with the JavaScript app had just ushered in an era of massively scalable and yet simpler-than-ever-to-scale solutions that support the new challenges brought about by the Internet of Things.

This is just the beginning.