The Internet of Things on AWS – Official Blog

How to Use Substitution Templates in the Rules Engine to Enrich IoT Messages

by Olawale Oladehin | in AWS IoT

Post by Marcos Ortiz, an AWS Solutions Architect

The AWS IoT platform allows you to use substitution templates to augment the JSON data returned when a rule is triggered and AWS IoT performs a rule action. The syntax for a substitution template is ${expression}, where expression can be any expression supported by AWS IoT in SELECT or WHERE clauses. For more information about supported expressions, see AWS IoT SQL Reference.

Substitution templates are an important feature for AWS IoT customers, especially when you need to dynamically add to your actions contextual information that is not stored in the payload but is part of the MQTT communication (for example, the MQTT client ID or the MQTT topic structure).
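
For example, a rule could enrich every message with the MQTT client ID and republish it to a per-device topic. The following Boto 3 sketch illustrates the idea; the rule name, role ARN, and topics are hypothetical placeholders and are not part of the demo described below.

import boto3

iot = boto3.client('iot')

# Illustrative rule: add the MQTT client ID to the payload and republish to a
# per-device topic built with a ${...} substitution template.
iot.create_topic_rule(
    ruleName='exampleSubstitutionRule',
    topicRulePayload={
        'sql': "SELECT *, clientid() AS client_id FROM 'devices/+'",
        'ruleDisabled': False,
        'actions': [{
            'republish': {
                'roleArn': 'arn:aws:iam::123456789012:role/iot-republish-role',  # placeholder
                'topic': 'enriched/${topic(2)}'  # substitution template evaluated per message
            }
        }]
    }
)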

Background

In this blog post, we use a fictitious company called ACME Offshore, a rig contractor that leases drilling rigs to oil and gas operators. ACME Offshore wants to differentiate itself from its competitors by implementing an ambitious plan to transform its rigs into what it calls next-generation rigs. The idea is to provide all of the rig sensor data to its customers in near real time. In this post, we will show how to use AWS IoT substitution templates in IoT rules so you can dynamically configure your rule actions with functions provided by the AWS IoT SQL Reference.

We provide an AWS CloudFormation template that will allow you to create all the AWS resources required to run a demo. At the end of the post, we provide instructions for automatically deleting all the resources created.

Architecture

The following diagram shows the overall architecture.

As the rigs operate, thousands of sensors are continuously generating data. That data is being collected, aggregated, used locally on the rig, and sent to the AWS IoT platform. All the data sent by a rig will go to a “myrigs/id” topic, where “id” is the unique identifier for the rig. The rig can send two types of data: data points and events.

The following is an example of a payload sent by a rig:

{
    "datapoints":[
        {"recorded_at":1498679312, "well_depth":10, "bit_depth":0.0},
        {"recorded_at":1498679313, "well_depth":10, "bit_depth":0.1},
        {"recorded_at":1498679314, "well_depth":10, "bit_depth":0.2}
    ],
    "events": {
        "errors": [
            {"recorded_at":1498679312, "code":1001, "message":"Error 1001"},
            {"recorded_at":1498679313, "code":1002, "message":"Error 1002"}
        ],
        "warnings": [
            {"recorded_at":1498679313, "code":1003, "message":"Error 1003"},
            {"recorded_at":1498679314, "code":1004, "message":"Error 1004"}
        ],
        "infos": [
            {"recorded_at":1498679314, "code":1005, "message":"Error 1005"},
            {"recorded_at":1498679314, "code":1006, "message":"Error 1006"}
        ]
    }
}

Each payload can have a combination of data points and events. There are three AWS IoT rules to process the data coming from the rigs.

1. Data Points Rule

The data points rule subscribes to the “myrigs/+” topic so it will be able to augment data points sent by any rig. It matches on the MQTT topic only and triggers two IoT actions when new data points are available. The “+” and the “#” characters are wildcards that can be used to subscribe to IoT topics. For more information about topic wildcards, see Topics in the AWS IoT Developer Guide.

1.1 Anomalies Action

This action sends all the data points to an Amazon Kinesis stream. An AWS Lambda function reads from that stream in order to detect any anomalies in the data points. In the demo portion of this post, the Lambda function checks for the following scenarios:

  • bit_depth values less than 0.
  • well_depth values less than 0.
  • bit_depth values greater than well_depth values, where both bit_depth and well_depth are greater than 0.

When an anomaly is detected, the Lambda function writes it to a DynamoDB table. Recording data anomalies is important not just for sensor maintenance and quality control, but also for operations and security.
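
As a rough sketch of how such a function might look (the DynamoDB table name, item attributes, and payload field names are assumptions for illustration; the function deployed by the CloudFormation template may differ), consider the following:

import base64
import json
from decimal import Decimal

import boto3

table = boto3.resource('dynamodb').Table('RigAnomalies')  # hypothetical table name

def lambda_handler(event, context):
    # Each Kinesis record carries one base64-encoded rig payload.
    for record in event['Records']:
        payload = json.loads(base64.b64decode(record['kinesis']['data']),
                             parse_float=Decimal)
        for point in payload.get('datapoints', []):
            anomalies = []
            if point['bit_depth'] < 0:
                anomalies.append('bit_depth less than 0')
            if point['well_depth'] < 0:
                anomalies.append('well_depth less than 0')
            if 0 < point['well_depth'] < point['bit_depth']:
                anomalies.append('bit_depth greater than well_depth')
            for anomaly in anomalies:
                table.put_item(Item={
                    'rig_id': payload.get('rig_id', 'unknown'),
                    'recorded_at': point['recorded_at'],
                    'anomaly': anomaly
                })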

1.2 Firehose Action

This action sends all the data points to an Amazon Kinesis Firehose delivery stream. The purpose of the Data Points IoT rule is to create a data lake of rig telemetry in near real time. That data can be used later on for replay purposes. The ability to replay data makes it possible to reprocess the data against new versions of data point-consuming systems. It is also important for auditing and reporting.

2. Events Rule

The events rule subscribes to the “myrigs/+” topic so it can process all events being sent by any rig. It queries only the events portion of the payload and triggers one AWS IoT action when new events are available.

Events can be generated manually or automatically in cases like the following:

  • A rig operator requests support from the onshore team.
  • The rig state changes.
  • Pumps are turned on.

2.1 Events Action

This action sends all the received events to a Kinesis Firehose delivery stream. The events are then stored in an S3 bucket.

3. Error Events Rule

The error events rule subscribes to the “myrigs/+” topic so it can process all error events sent by any rig. It queries only the error events portion of the payload and triggers two AWS IoT actions when new error events are available.

3.1 Republish Action

This action republishes all error events coming from a given rig to a specific MQTT topic. For example, error events coming from rig 99 will be republished to “myrigs/99/errors”. This allows monitoring systems and remote support drilling engineers to be notified in real time of any errors occurring on rigs. All that’s required is to subscribe to the error event topic.

Systems can receive all errors coming from all rigs by subscribing to the “myrigs/+/errors” topic.

3.2 Notification Action

This action routes all error events to an Amazon SNS topic named “acme-rigs”. This allows the same remote support drilling engineers to receive notification (e-mail or text) even if they are not in front of a computer. Amazon SNS can also notify external monitoring systems through, for example, an HTTP callback request whenever error events are received for a given rig.

Provisioning the Demo

Click the Launch Stack button below.

This will redirect you to the AWS CloudFormation console. This demo is deployed in the US East (Northern Virginia) Region. Click Next.

On the Specify Details page, enter the stack parameters. For SnsSubscriberEmail, type your e-mail address so you can receive e-mail notifications from Amazon SNS. Click Next.

You can customize options (tags, permissions, notifications) on the following page or simply click Next to continue with the default options.

On the Review page, select the I acknowledge that AWS CloudFormation might create IAM resources box, and then click Create to launch the AWS CloudFormation stack.

After a few minutes, the provisioning should be complete.

Select the AWS CloudFormation stack, and on the Outputs tab, copy the value for the S3BucketName key. Our Kinesis Firehose delivery stream will write data points and events to this bucket.

After the provisioning is complete, you will receive an e-mail from Amazon SNS.

Click Confirm subscription so you can receive emails from Amazon SNS whenever a rig sends error events.

Before we start testing the demo, let’s review the AWS IoT substitution templates. On the Rules page in the AWS IoT console, you will see the three rules we created.

On the acmeRigDatapoints AWS IoT rule, we use the newuuid() AWS IoT function to set the value of our Kinesis Streams partition key. The newuuid() function returns a random 16-byte UUID, so no matter how many payloads AWS IoT receives, we will always be evenly distributing traffic between all the shards of our Kinesis stream.

We also use the topic AWS IoT function on a query statement so we can add the rig_id information when writing the data points to DynamoDB or S3.
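
For example, if the rule's query statement selects the data points and adds topic(2) AS rig_id (the exact statement used by the demo may differ slightly), a payload published by rig 99 to “myrigs/99” would be forwarded to the actions roughly as:

{
    "rig_id": "99",
    "datapoints": [
        {"recorded_at":1498679312, "well_depth":10, "bit_depth":0.0},
        {"recorded_at":1498679313, "well_depth":10, "bit_depth":0.1}
    ]
}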

On the acmeRigAllEvents AWS IoT rule, we only use the topic function on the query statement, so we can add the rig_id information when writing the events to the Amazon S3 bucket.

On the acmeRigErrorEvents AWS IoT rule, we use the topic function to dynamically set the republishing topic for our AWS IoT republish action. This allows us to dynamically republish any errors published to the “myrigs/id” topic to the “myrigs/id/errors” topic. For example, rig 99 sends payloads to “myrigs/99” and any errors are republished to “myrigs/99/errors”. If we are talking about rig 5, those topics would be “myrigs/5” and “myrigs/5/errors”, respectively.

We also use the same topic function to add rig_id context to the payload of our SNS notification.

Testing the Demo

Now you should be all set to test the demo. On the Test page in the AWS IoT console, in Subscription topic, type “myrigs/1” and then click Subscribe to topic.

Follow the same steps to subscribe to the “myrigs/1/errors” topic. We want to subscribe to this topic so we can test our AWS IoT republish action.

To simulate a rig sending a payload to your AWS IoT endpoint, copy the following JSON payload to your clipboard. (In this case, the rig ID is 1.)

The sample payload we are using has the following anomalies:

  1. At 1498679312, well_depth is less than 0.
  2. At 1498679313, bit_depth is less than 0.
  3. At 1498679314, bit_depth is greater than well_depth.

Click the “myrigs/1” topic, delete all the content in the text area, and then paste the payload you just copied into that text area. Now click Publish to topic.

If you click the “myrigs/1/errors” topic, you should see that it received the errors you published in the payload.

Navigate to the DynamoDB table. On the Items tab, you will be able to see the three anomalies saved in that table.

Check the e-mail address you used in the AWS CloudFormation stack. You should receive a message with the errors we sent in the sample payload:

After you publish the test payload, it should take about one minute for Amazon Kinesis Firehose to write the data points and events to the S3 bucket.

Cleaning Up

After you finish testing, be sure to clean up your environment so you are not charged for unused resources.

Go to the Amazon S3 console and delete the bucket contents.

Go to the AWS CloudFormation console, select the “acme-iot” stack, and then click Delete Stack.

Conclusion

We hope you found this demo useful. Feel free to leave your feedback in the comments.

 

Bites of IoT: Creating AWS IoT Rules with AWS CloudFormation

Welcome to another installment in the Bites of IoT blog series.

In this bite, we will use AWS CloudFormation, the AWS IoT rules engine, and AWS Lambda to automate the setup and teardown of two AWS IoT rules.  You can use the AWS IoT console to create and edit rules by hand, but in production you might want to use automation to make your deployments of rules repeatable and easier to manage.  AWS CloudFormation enables you to deploy rules consistently across applications, manage updates, share your infrastructure with others, and even use revision control to track changes made over time.

Configure the CLI

As with all Bites of IoT posts, we are using the AWS IoT ELF client available in AWS Labs on GitHub. If you aren’t familiar with the ELF, see the first post in this series.

What Are Rules?

AWS IoT rules are SQL statements that can be used to perform three kinds of functions on MQTT messages:

  • Test: A rule can test an MQTT message to determine if it meets some criteria.  For example, a rule could check to see if a temperature field is above or below a threshold, or if a text field contains a certain string.
  • Transform: A rule can pass an MQTT message through without changing it or it can transform it in some way.  There are several SQL functions to support transformations.  For more information, see the AWS IoT SQL Reference.  Common transformations include changing a value from one system of measurement to another (e.g. Fahrenheit to Celsius), hashing sensitive information to obscure it from downstream systems (e.g. MD2, MD5, SHA1, SHA224, SHA256, SHA384, SHA512), removing information when it isn’t useful in another data processing stage, or adding information required by other processes (e.g. timestamps).
  • Trigger: When a rule is evaluated and its test criteria (if any) is met, it triggers an action.  The examples in this post cover the republish action and the AWS Lambda action.  The republish action takes a message and republishes it to another topic.  The AWS Lambda action sends the message to an AWS Lambda function.  There are several other actions available that you can use to tie into other AWS services.
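
For example, a single rule statement can combine a test and a transform before its actions are triggered. The statement below is illustrative only; the topic and field names are assumptions, not part of the examples that follow:

SELECT topic(2) AS device_id, (temperature - 32) / 1.8 AS temperature_c FROM 'sensors/+' WHERE temperature > 100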

Creating a SQL-Only Rule with AWS CloudFormation

The first rule we are going to create will receive the “IoT ELF hello” message from ELF.  It will republish a new message on an output topic that indicates which client sent the message.  We will create the rule using the SQL rules engine in AWS IoT.

Here is the SQL statement for our first rule:

SELECT concat(topic(2), " says hello!") AS feedback FROM 'input/#'

Let’s walk through each part to see what it does:

  • SELECT – All SQL statements in the rules engine start with SELECT.
  • concat(topic(2), " says hello!") – The concat function combines two strings.
    • The first string is obtained from the topic(2) function, which means it uses the second segment of the topic.  Because ELF publishes messages on input/thing_X where X is the thing’s ID, this string will be either thing_0 or thing_1.
    • The second string is simply " says hello!".  This part of the statement will evaluate to either thing_0 says hello! or thing_1 says hello!.
  • AS feedback – This means that, in our output message, the string we just constructed will be referred to as feedback.  Our full output message will be either { "feedback": "thing_0 says hello!" } or { "feedback": "thing_1 says hello!" }.
  • FROM 'input/#' – This means that we want this rule to receive messages on any topic under the topic input.  This would match input/thing_0, input/thing_1, or even input/thing_0/one_more_level.  If we didn’t want to match all topics under input, we could change input/# to input/+.  That would only match input followed by one additional level in the topic hierarchy.  We wouldn’t process messages on the topic input/thing_0/one_more_level if we were to use input/+.

YAML AWS CloudFormation Template for SQL-Only Rule

AWSTemplateFormatVersion: 2010-09-09
Description: A SQL only IoT republish rule that responds to a device saying hello

Resources:
  SQLRepublishTopicRule:
    Type: AWS::IoT::TopicRule
    Properties:
      RuleName: SQLRepublish
      TopicRulePayload:
        RuleDisabled: false
        Sql: SELECT concat(topic(2), " says hello!") AS feedback FROM 'input/#'
        Actions:
          - Republish:
              Topic: output/${topic(2)}
              RoleArn: !GetAtt SQLRepublishRole.Arn
  SQLRepublishRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - sts:AssumeRole
            Principal:
              Service:
                - iot.amazonaws.com
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: iot:Publish
                Resource: !Join [ "", [ "arn:aws:iot:", !Ref "AWS::Region", ":", !Ref "AWS::AccountId", ":topic/output/*" ] ]

YAML AWS CloudFormation Template Explained

Let’s break this down into pieces.

SQLRepublishTopicRule Section

  SQLRepublishTopicRule:
    Type: AWS::IoT::TopicRule
    Properties:
      RuleName: SQLRepublish
      TopicRulePayload:
        RuleDisabled: false
        Sql: SELECT concat(topic(2), " says hello!") AS feedback FROM 'input/#'
        Actions:
          - Republish:
              Topic: output/${topic(2)}
              RoleArn: !GetAtt SQLRepublishRole.Arn

This section creates an AWS resource that is a topic rule (AWS::IoT::TopicRule) with several properties.

  • The first property is the rule name SQLRepublish. You’ll see this rule name in the AWS IoT console after this template has been launched.
  • The second property is the topic rule payload. The topic rule payload contains several attributes:
  • The first attribute indicates that the rule is enabled.  Rules can be disabled so they aren’t executed, but they remain in the AWS IoT console.  That way, you can easily enable them rather than creating them again.
  • The second attribute is the SQL statement we explained earlier.
  • The third attribute contains the actions performed when the rule’s criteria is met.  In this case, we want to republish to an output topic using an IAM role specified in the next section of the AWS CloudFormation template.  We specify the role using the role’s ARN attribute.  !GetAtt is the YAML syntax’s get attribute intrinsic function.  All functions in YAML templates are prefixed with an exclamation point !.

There are other intrinsic functions that are useful for managing your stack. They provide variables that are available during runtime only.

The topic here is output/${topic(2)}.  This syntax means that it will extract the second segment of the input topic (e.g. thing_0 from input/thing_0) and use it in that location.  If we received an input message from thing_0, our output topic would be output/thing_0.  This syntax allows the rule to dynamically publish to any of a number of output topics without a separate rule for each thing or each input topic.

SQLRepublishRole Section

  SQLRepublishRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - sts:AssumeRole
            Principal:
              Service:
                - iot.amazonaws.com
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: iot:Publish
                Resource: !Join [ "", [ "arn:aws:iot:", !Ref "AWS::Region", ":", !Ref "AWS::AccountId", ":topic/output/*" ] ]

This section creates an AWS resource that is an IAM role (AWS::IAM::Role) with several properties:

  • The first property is the assume role policy document.  This property allows AWS IoT to assume this role in your account so it can republish messages from the rules engine.
  • The second property is the list of policies associated with this role.

This role has one policy assigned to it called root, but the name is unimportant. It can be changed to something else, if you like.  The policy contains one statement that has three attributes:

  • The first attribute indicates that the effect of this statement is to allow access to a particular action.
  • The second attribute declares the action.  In this case, it is the iot:Publish action that lets an application publish an MQTT message.
  • The third attribute is a resource that we’ll break down in sections.

Here is the full resource statement:

Resource: !Join [ "", [ "arn:aws:iot:", !Ref "AWS::Region", ":", !Ref "AWS::AccountId", ":topic/output/*" ] ]

The !Join function joins together a list of strings. It includes a separator string between each pair of strings.  In our case, we don’t want anything added between the strings, so we specified an empty string ("") as the separator.  This means the strings will be joined exactly as they are specified.

The next few statements build the ARN of the topic that we want to republish to.  AWS IoT topic ARNs look like this:

arn:aws:iot:<region>:<accountId>:topic/<topicName>

The beginning of the ARN is always arn:aws:iot: followed by the region, a colon, the current account ID, a colon, the string topic/, and the topic name.  The topic name can include wildcards because it is an IAM resource, but the MQTT wildcards # and + are not allowed.  You can use only * and ?.

To make this code reusable across accounts and regions, we use !Ref "AWS::Region" and !Ref "AWS::AccountId" to fill in the region and account ID automatically.  The !Ref function tells AWS CloudFormation to handle this for us.
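
For example, with hypothetical values for the region and account ID, the joined resource evaluates to:

arn:aws:iot:us-east-1:123456789012:topic/output/*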

Deploying the SQL-Only Rule

Here are the steps for deploying the rule:

  1. Save the entire template to a file named sql-hello.yaml.
  2. Sign in to the AWS Management Console, and then open the AWS CloudFormation console.
  3. Choose Create Stack.
  4. Under Choose a template, click Choose file, select sql-hello.yaml, and then choose Next.
  5. For stack name, type SQLRule, and then choose Next.
  6. On the Options page, leave the fields at their defaults, and then choose Next.
  7. On the Review page, select I acknowledge that AWS CloudFormation might create IAM resources, and then choose Create.

The state displayed for your AWS CloudFormation stack should be CREATE_IN_PROGRESS.  You can periodically click the circular arrow in the upper-right corner to refresh the view.  When the stack has been created, the state displayed for your stack will be CREATE_COMPLETE.

Testing the SQL-Only Rule

We can now use the ELF client to test the rule.  If your client hasn’t been cleaned up since the last time you used it, you must first execute this command:

python elf.py clean

Now open two terminals.  One terminal will publish messages. The other will subscribe to the messages coming from our rule.

In the first terminal, execute these commands:

python elf.py create 2
python elf.py send --topic input --append-thing-name --duration 60

In the second terminal, execute this command:

python elf.py subscribe --topic output --append-thing-name --duration 60

In the first terminal, you should see messages like this:

INFO - ELF thing_0 posted a 24 bytes message: {"msg": "IoT ELF Hello"} on topic: input/thing_0
INFO - ELF thing_1 posted a 24 bytes message: {"msg": "IoT ELF Hello"} on topic: input/thing_1

In the second terminal, you should see messages like this:

INFO - Received message: {"feedback":"thing_0 says hello!"} from topic: output/thing_0
INFO - Received message: {"feedback":"thing_1 says hello!"} from topic: output/thing_1

If you see messages in both terminals, then everything is working.  Now you can go back to the AWS CloudFormation console, choose your SQLRule stack, and then choose Delete Stack. Wait until the stack has been deleted, and then try the commands again in the two terminals. You should see the same messages in the first terminal, but no messages in the second.

Creating a Rule with AWS CloudFormation to Route Messages to AWS Lambda

Now we’ll create a rule that routes MQTT messages to AWS Lambda.  AWS Lambda will receive the message and publish a new MQTT message based on what it receives from the rules engine.

YAML AWS CloudFormation Template for AWS Lambda Rule

AWSTemplateFormatVersion: 2010-09-09
Description: A simple IoT republish rule that responds to a device saying hello with AWS Lambda

Resources:
  LambdaRepublishTopicRule:
    Type: AWS::IoT::TopicRule
    Properties:
      RuleName: LambdaRepublish
      TopicRulePayload:
        RuleDisabled: false
        Sql: SELECT topic(2) AS thing_name, msg FROM 'input/#'
        Actions:
          - Lambda:
              FunctionArn: !GetAtt LambdaHelloFunction.Arn
  LambdaRepublishRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              Service:
                - lambda.amazonaws.com
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: logs:*
                Resource: arn:aws:logs:*:*:*
              - Effect: Allow
                Action: iot:Publish
                Resource: !Join [ "", [ "arn:aws:iot:", !Ref "AWS::Region", ":", !Ref "AWS::AccountId", ":topic/output/*" ] ]
  LambdaHelloFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: LambdaHello
      Role: !GetAtt LambdaRepublishRole.Arn
      Timeout: 5
      Handler: index.lambda_handler
      Runtime: python2.7
      MemorySize: 512
      Code:
        ZipFile: |
                  import boto3
                  import json

                  def lambda_handler(event, context):
                      client = boto3.client('iot-data')
                      thing_name = event['thing_name']
                      payload = {}
                      payload['feedback'] = thing_name + " said hello to Lambda!"
                      payload = bytearray(json.dumps(payload))
                      response = client.publish(topic='output/' + thing_name, qos=0, payload=payload)
  LambdaInvocationPermission:
    Type: AWS::Lambda::Permission
    Properties:
      SourceArn: !Join [ "", [ "arn:aws:iot:", !Ref "AWS::Region", ":", !Ref "AWS::AccountId", ":rule/", !Ref "LambdaRepublishTopicRule" ] ]
      Action: lambda:InvokeFunction
      Principal: iot.amazonaws.com
      FunctionName: !GetAtt LambdaHelloFunction.Arn
      SourceAccount: !Ref AWS::AccountId

YAML AWS CloudFormation Template Explained

Although this template is similar to the SQL-only template, there are some important differences.

The first is the SQL statement, which looks like this:

SELECT topic(2) AS thing_name, msg FROM 'input/#'

We’re using SQL to extract the topic(2) value to a field called thing_name and we’re passing through the msg field.  This creates a JSON message with two fields that will be sent to our Lambda function.  When we were using the republishing feature of the rules engine, we could access this value directly and use it to specify our output topic.  When Lambda receives the JSON message from AWS IoT, the information it needs for its processing must be included in the message.
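
For example, when ELF publishes {"msg": "IoT ELF Hello"} on input/thing_0, the rule hands the Lambda function an event along these lines:

{"thing_name": "thing_0", "msg": "IoT ELF Hello"}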

LambdaRepublishRole Section

  LambdaRepublishRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              Service:
                - lambda.amazonaws.com
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: logs:*
                Resource: arn:aws:logs:*:*:*
              - Effect: Allow
                Action: iot:Publish
                Resource: !Join [ "", [ "arn:aws:iot:", !Ref "AWS::Region", ":", !Ref "AWS::AccountId", ":topic/output/*" ] ]

This section creates a role that Lambda can assume. It allows Lambda to perform the following actions in our account:

  • Write to Amazon CloudWatch Logs with logs:* and arn:aws:logs:*:*:*
  • Publish to the output topic hierarchy as we did in the other template

AWS IoT no longer needs publish permission because Lambda will handle publishing the messages for us.

LambdaHelloFunction Section

  LambdaHelloFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: LambdaHello
      Role: !GetAtt LambdaRepublishRole.Arn
      Timeout: 5
      Handler: index.lambda_handler
      Runtime: python2.7
      MemorySize: 512
      Code:
        ZipFile: |
                  import boto3
                  import json

                  def lambda_handler(event, context):
                      client = boto3.client('iot-data')
                      thing_name = event['thing_name']
                      payload = {}
                      payload['feedback'] = thing_name + " said hello to Lambda!"
                      payload = bytearray(json.dumps(payload))
                      response = client.publish(topic='output/' + thing_name, qos=0, payload=payload)

This section defines our Lambda function:  It’s written in Python, gets 512 MB of RAM, has a five-second timeout, uses the role we just defined to publish messages in AWS IoT, and its code is specified inline.

The code creates an IoT data client with Boto 3, extracts the thing name, builds a payload in which the feedback field is populated with our message, converts the payload dictionary to JSON, converts the JSON to a byte array, and then publishes it to the correct output topic by appending the thing name to output/. The IoTDataPlane publish function in Boto 3 requires that the data passed to it is either a byte array or a reference to a file.
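
If you want to exercise the function without going through MQTT, you can invoke it directly with a hand-crafted event after the stack is deployed. This is a sketch; it assumes AWS credentials with lambda:InvokeFunction permission and that the function name LambdaHello from the template is unchanged.

import json
import boto3

lambda_client = boto3.client('lambda')

# Invoke the deployed function with an event shaped like the rule's SQL output.
response = lambda_client.invoke(
    FunctionName='LambdaHello',
    Payload=json.dumps({'thing_name': 'thing_0', 'msg': 'IoT ELF Hello'}).encode('utf-8')
)
print(response['StatusCode'])  # 200 on a successful synchronous invocation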

LambdaInvocationPermission Section

  LambdaInvocationPermission:
    Type: AWS::Lambda::Permission
    Properties:
      SourceArn: !Join [ "", [ "arn:aws:iot:", !Ref "AWS::Region", ":", !Ref "AWS::AccountId", ":rule/", !Ref "LambdaRepublishTopicRule" ] ]
      Action: lambda:InvokeFunction
      Principal: iot.amazonaws.com
      FunctionName: !GetAtt LambdaHelloFunction.Arn
      SourceAccount: !Ref AWS::AccountId

This final section is different from the SQL republish template.  This permission allows the Lambda function to be invoked by the AWS IoT rule.  Without this permission, even if the rule is executed, Lambda will not allow AWS IoT to run the code.

Deploying and Testing the AWS Lambda Rule

Save this template as lambda-hello.yaml, launch it the same way we launched the last template but use the name LambdaRule, and then run the ELF commands in the two terminals again.  You’ll see output like this in the second terminal:

INFO - Received message: {"feedback": "thing_0 said hello to Lambda!"} from topic: output/thing_0
INFO - Received message: {"feedback": "thing_1 said hello to Lambda!"} from topic: output/thing_1

What’s Next?

You now have two templates that you can use as a starting point to develop rules that republish messages. You created one rule that uses only the SQL rules engine in AWS IoT and another that invokes a Lambda function written inline in Python.  As you build new templates with new rules, you can use AWS CloudFormation to make sure they’re set up consistently, repeatably, and easily.  What will you connect AWS IoT to next?

Samsung Selects AWS IoT for Cloud Print with Help from ClearScale

Background

ClearScale was founded in 2011, with a focus on delivering best-of-breed cloud systems integration, application development, and managed services. We are an AWS Premier Consulting Partner with competencies in Migration, DevOps, Marketing & Commerce, Big Data, and Mobile. Our experienced engineers, architects, and developers are all certified or accredited by AWS.

We have a long track record of successful IoT projects and a proven ability to design and automate IoT platforms, build IoT applications, and create infrastructure on which connected devices can easily and securely interact with each other, gather and analyze data, and provide valuable insights to your business and customers.

Our firm is unique in that we offer a breadth of technical experience coupled with a holistic organizational view. This allows us to truly partner with our customers and translate complex business requirements into solid, scalable, cloud-optimized solutions. At ClearScale, we understand best practices for driving the maximum business value from cloud deployments.

Samsung partnered with our firm to launch a Cloud Solutions Platform for delivering robust infrastructure and printing solutions at cloud scale for any device from any location. To architect the device management component of the platform, we conducted a competitive analysis between AWS IoT and the incumbent solution based on the Ejabberd messaging platform.

Because the goal of this effort was to deliver to Samsung the most reliable printing services for its customer base, the analysis focused on a key item: the device management component. This component handles authentication and messaging between devices (in this case, printers) and the cloud infrastructure. It also collects instrumentation data from the devices for later analysis, allowing Samsung to understand the health and utilization of each device and to identify issues that require remote troubleshooting and subsequent proactive maintenance.

High Level Application Overview: 

Defining the Test Rules

Working with Samsung, we defined a set of criteria for evaluating AWS IoT versus Ejabberd for their device management capability. The attributes were prioritized and weighted based on Samsung’s business requirements. While these key areas are applicable to any IoT evaluation, the subsequent scoring methodology may differ somewhat depending on the client’s specific use case(s) and requirements.

The analysis needed to address two major areas: functional testing and load testing. For the functional testing, we wanted to compare the Ejabberd solution to AWS IoT, evaluating each solution’s core capabilities, security posture, and the ubiquity of its technology. For the load testing, we needed to understand the availability, scalability, maintainability, performance, and reliability of each solution so that the metrics gathered for each area of concern could be applied to a scoring matrix as shown below.

* A score was awarded for each quality attribute, with a total score being the sum of all scores for the quality attributes. The maximum total score for a solution was deemed to be 100.

Functional Testing

Functional testing was performed first, with the goal of ensuring each system could fulfill the defined functional requirements; only after that was the more expensive load testing performed. We deployed a small environment for Ejabberd and configured the AWS IoT service so that they were functionally identical. Five functional tests were performed to validate the solutions, and both solutions satisfied Samsung’s requirements without any issues.

Load Testing

Defining the Scenarios

Before comparing Ejabberd and AWS IoT, we needed to define the load testing criteria. We opted to run two distinct scenarios:

  1. Simulate peak load conditions
  2. Demonstrate system stability

The message rates were calculated from the following profile:

  • Consumer (2-3 jobs per week)
  • SMB (10-20 jobs per week)
  • Enterprise (150-300 jobs per week)
  • Proposed distribution: 50%, 30%, 20%
  • Total number of agents: 500,000

AVERAGE NUMBER OF MESSAGES PER SECOND

AvgMsgs = MsgsPerJob * NumOfAgents * JobsPerWeek / SecondsPerWeek

= 2 * 500,000 * 300 / (7 * 24 * 60 * 60)
= 496.032

Where:

  • MsgsPerJob = Number of messages resulting from each job (2; see note)
  • AvgJobs = Average number of jobs per second
  • NumOfAgents = Total number of agents (500,000)
  • JobsPerWeek = Number of jobs a week per one agent
  • SecondsPerWeek = Number of seconds in a week (7 * 24 * 60 * 60)

Note: Results are doubled due to SCP behavior. For each job, XoaCommMtgSrv sends a PING message to an Agent. After the Agent executes the job, XoaCommMtgSrv sends another PING message to XCSP Service.

MAXIMUM NUMBER OF MESSAGES PER SECOND

  • Number of jobs executed during busy hours: 90%
  • Number of busy hours per week: 10 (2 hours per day; 5 days per week)

MaxMsgs = MsgsPerJob * BusyHourJobs * NumOfAgents * JobsPerWeek / BusyHours

= 2 * 0.9 * 500,000 * 240 / 36,000
= 6,000

Where:

  • MsgsPerJob = Number of messages resulting from each job (2; see note)
  • BusyHourJobs = Percentage of jobs expected to be executed during busy hours (90% = 0.9)
  • NumOfAgents = Total Number of agents (500,000)
  • JobsPerWeek = Number of jobs a week per one agent
  • BusyHours = Number of seconds in busy hours a week (2 * 5 * 3600)
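
The same arithmetic, reproduced as a quick sanity check (the values match the calculations above):

SECONDS_PER_WEEK = 7 * 24 * 60 * 60                           # 604,800
BUSY_SECONDS_PER_WEEK = 2 * 5 * 3600                          # 36,000

avg_msgs = 2.0 * 500000 * 300 / SECONDS_PER_WEEK              # ~496.03 messages/second
max_msgs = 2.0 * 0.9 * 500000 * 240 / BUSY_SECONDS_PER_WEEK   # 6,000 messages/second
print(round(avg_msgs, 3), round(max_msgs))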

Load Generation

We selected Apache JMeter as our load generation engine. It is an extensible solution with which customized tests are easy to develop. The product is widely used and has strong community support.

“The Apache JMeter™ application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance. Apache JMeter may be used to test performance on static/dynamic resources and dynamic web applications. It can be used to simulate a heavy load on a server, group of servers, network or object to test its strength or to analyze overall performance under different load types.”

Ejabberd and AWS IoT use different protocols, so we developed custom plugins for Apache JMeter (XMPP and MQTT, respectively). The plugins allowed us to create custom logging for deeper analysis, maintain persistent connections, and manage secure connections. Our goal was to have the load generation closely emulate the actual system functionality, including connection security and persistence. This included requests/messages from devices (Agents) as well as requests/responses from Samsung’s device management application (XoaCommMtgSrv).

By using an existing tool and extending its functionality, we reduced the overall time needed to develop the load generation code. The following custom JMeter plugins were created to provide capabilities required by the test methodology:

  • MQTT protocol plugin for JMeter – used for AWS IoT testing
  • XMPP protocol plugin for JMeter – used for Ejabberd testing

There are several reasons to use custom plugins:

  • The test model can more closely emulate the actual system
  • Emulate a small number of XoaCommMtgSrv servers and a very large number of Agents
  • Support persistent connections – not supported by existing plugins
  • Support secure connections – not supported by existing plugins

Custom logging

  • Distinguish XoaCommMtgSrv server actions from Agent actions
  • Associate a specific JMeter engine node with the XoaCommMtgSrv/Agent in log messages
  • Capture job execution sequences and identify out-of-order job processing
  • Enable low level debugging

The JMeter test plans for each solution have the same high-level behavior:

While testing the JMeter MQTT plugin, we determined that a single JMeter engine node was capable of emulating 8,000 agents without a performance bottleneck. In order to emulate 500,000 agents, as called for by the test methodology, we used 64 JMeter engine nodes for AWS IoT load generation.

While testing the JMeter XMPP plugin, we discovered that a single JMeter engine node was capable of simulating 6,500 agents without a performance bottleneck. In order to emulate 500,000 agents, as called for by the test methodology, we used 80 JMeter engine nodes for Ejabberd load generation. This was an important step to ensure that the metrics were not skewed by limitations on the load generation side of the equation.

We deployed the JMeter management node and engine nodes on c4.xlarge EC2 instances. The JMeter cluster was deployed within a single Availability Zone (AZ) for simplicity.

Test Execution

Preparing to load test AWS IoT (the MQTT message broker) was a straightforward process. We configured the service, and AWS handled all of the resources and scaling behind the scenes. To properly simulate unique devices, we generated 512,000 client certificates and policy rules. These certificates and policies were required for clients to authenticate to the MQTT message broker provided by AWS IoT.

Preparing the Ejabberd environment took a bit more effort; we needed to conduct single-node load tests to identify suitable instance sizes and the maximum capacity of each node. We elected to run the full load tests against two instance types and deployed two Ejabberd clusters (attached to MySQL on EC2): one using c4.2xlarge instances with 9 nodes and one using c4.4xlarge instances with 4 nodes. In order to replicate real-world scenarios, we provisioned an extra node per cluster for HA purposes.

For Stability and Busy Hours testing, the following configurations were used:

  • c4.2xlarge with 9 nodes
  • c4.4xlarge with 4 nodes

Table: Ejabberd Single Node Limits


The common bottleneck for both instance types is “Auth Rate”. To support 1,500 auth/sec, three c4.4xlarge instances are needed. Because of the high availability requirement, we added one extra instance, for a total of 4 nodes in that cluster. We used the same formula to calculate the 9-node cluster of c4.2xlarge instances.

We ran two iterations of the Peak test scenario and two iterations of the Stability test scenario in order to compare results. We cleared the JMeter engines of previous test data and temporary files and restarted the instances to ensure the load generation platform was clean and would provide accurate, reliable results from one test run to the next, without the results being skewed by previous test data.

Test Results

AWS IOT

General Information

Both test cases for AWS IoT passed. The number of errors was less than 0.01%.

Table: AWS IoT Load Test Results

The “Error Distribution” diagrams show the cumulative number of errors over time. The relationship is almost linear.

Stability Load Testing

Table: Stability Testing – Summary

Diagram: Stability Testing – Message Latency Histogram

Notes:

Histograms for all tests represent the distribution of message latency (the amount of time needed to send a message from a publisher to a subscriber). The values will differ from real-world values because the testing environment is located in the same region as the tested services. In real-life scenarios, agents will be distributed globally, so Internet-related delays will apply.

The purpose of the histograms presented in this document is to show whether there are any delays related to buffering or overload (service degradation).

Diagram: Stability Testing – Error Distribution (Cumulative)

Busy Hour Load Testing

Table: Busy Hour Load Testing – Summary

Diagram: Busy Hour Load Testing – Message Latency Histogram

Diagram: Busy Hour Load Testing – Error Distribution (Cumulative)

Notes:

During the first test, 1,712 threads lost their connection (16–37 threads on each engine node) between 22:39:17 and 22:41:52 UTC. The threads were reconnected to different AWS IoT endpoint IPs.

All threads reconnected successfully, but only after the message receive timeout. In this case, AWS IoT was dropping messages because no agents were subscribed to the topics, so this can’t be considered an AWS IoT error.

We decided to normalize the first diagram by removing the data for that time period.

EJABBERD

General Information

Both the stability and busy hour load test cases for Ejabberd passed. The number of errors was less than 0.01%.

Stability Load Testing

The test case was executed twice for each instance size and passed without errors.

Table: Stability Testing – Summary

Notes:

  • All tests finished successfully
  • Test #1 for c4.4xlarge was stopped because it ran over time. One message was not received due to the test stop

Diagram: Stability Testing – Message Latency Histogram (c4.2xlarge)

Diagram: Stability Testing – Message Latency Histogram (c4.4xlarge)

Diagram: Stability Testing – Error Distribution (c4.2xlarge)

Diagram: Stability Testing – Error Distribution (c4.4xlarge)

Busy Hour Load Testing

Table: Busy Hour Load Testing – Summary

Diagram: Busy Hour Load Testing – Message Latency (c4.2xlarge)

Diagram: Busy Hour Load Testing – Message Latency (c4.4xlarge)

Diagram: Busy Hour Load Testing – Error Distribution (c4.2xlarge)

Diagram: Busy Hour Load Testing – Error Distribution (c4.4xlarge)

Comparing Results

At the conclusion of the load testing, we found the following:

The analysis showed that both solutions could provide very comparable services for the load profile and use cases.

Cost Analysis

We conducted a cost comparison based on capital expenses (CAPEX) and operational expenses (OPEX). For this particular analysis, we defined CAPEX as the cost of development and deployment of the given solution. OPEX was defined as monthly/yearly infrastructure and maintenance costs. For ease of calculation, we did not include human resource and common organizational expenses for this exercise.

CAPEX costs are based on actual work, performed by ClearScale, for other clients to develop and deploy similar solutions.

Upon further review it was apparent that the AWS IoT solution was extremely cost effective from a capital expenditure perspective. The huge difference in CAPEX costs also indicated that AWS IoT would take less time to deploy.

Conclusion

The AWS IoT solution scored higher in Availability, Maintainability, and Cost. Ejabberd scored higher on Message Reliability, which carried the lowest weight and priority in our scoring matrix based on the criteria and requirements provided by Samsung.

Table: AWS IoT Results Summary Table

Table: Ejabberd Results Summary Table

Samsung had two main objectives they were attempting to answer with this analysis:

  • “How does this affect our customers?” AWS IoT provides the availability, consistency, and security that deliver the best possible service. This enables Samsung to keep printers online and operational so that their customers can experience uninterrupted printing services.
  • “How does this affect our innovation?” (We can define innovation as the time a developer spends creating new services.) As we can see from the level of effort required to set up our testing environments, the AWS IoT solution is much easier to deploy than the Ejabberd clusters. We did not have any overhead for performance tuning or system scaling. The best part of AWS IoT is that there is zero maintenance effort moving forward. The time and money saved can be redirected to creating new products and features for customers.

We were able to demonstrate to Samsung that AWS IoT was the better solution. By reviewing the test results and the comprehensive cost analysis, we were able to provide Samsung with a solution that met the requirements that were set forth, was scalable and maintainable, and delivered an improved customer experience by leveraging new and innovative technologies.

Learn more about ClearScale IoT

How to route messages from devices to Salesforce IoT Cloud

AWS IoT customers can now route messages from devices directly to Salesforce IoT Cloud with a new AWS IoT Rules Engine Action that requires only configuration.

As part of the strategic relationship between AWS and Salesforce, the combination of AWS IoT and Salesforce IoT Cloud allows you to enrich the messages coming from your devices with customer data residing in Salesforce. This results in deeper insights and allows customers to act on those insights within the Salesforce ecosystem.

In this article, we will walk you through a step-by-step example so you can learn how to configure and test this new action type.

Bring case management to your connected devices

We are going to use an industrial solar farm as an example, inspired by a demonstration that took place at re:Invent 2016.

This demonstration showcases AWS IoT-connected products reporting a critical failure. As a result, a new case record is created in the Salesforce Service Cloud case management system, instructing a technician to go on-site, assess the situation, and make repairs.

To learn more about it, visit the AWS YouTube channel.

Create an AWS IoT Rule with a Salesforce action type

Start by logging into the AWS IoT console.

Click on the Rules section and select Create a rule.

 

Name your rule solarPanelTelemetry and then enter a meaningful description.

We will create a simple rule to forward all the data coming from a solar panel to Salesforce IoT Cloud. Enter * as the Attribute of the rule to allow all data coming from the device to be passed on. Enter solarPanels/D123456 as the topic filter and leave the condition field blank.
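
With those settings, the rule’s query statement should be roughly equivalent to the following (the topic filter above, all attributes, and no condition):

SELECT * FROM 'solarPanels/D123456'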

Once you’re done, click Add action.

 

Select the Salesforce action type and click on Configure action.

Go to the Salesforce IoT Cloud console and copy/paste the value displayed on the Input Stream for the URL and the Token. To learn more about Input Streams please refer to the Salesforce documentation.

 

Click on Add action and review the AWS IoT Rule. You should see the Salesforce action you just added. Click on Create rule.

 

Test your configuration

We are going to test the AWS IoT Rule we just created by simulating a message coming from a solar panel. Go to the Test section of the AWS IoT Console.

Enter solarPanels/D123456 as the Subscription topic and push the Subscribe to topic button. This will enable you to verify that the sample message you are sending is published to the topic matching the rule’s configuration.

Next enter solarPanels/D123456 for the topic name in the Publish section and copy/paste the following JSON:

{
  "deviceId": "D123456",
  "volts": 70,
  "amps": 1.5,
  "watts": 90,
  "latitude": "45.0000",
  "longitude": "-122.0000",
  "timestamp": "1493750762445"
}

Finally, push the Publish to topic button to send the message.
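
If you prefer to publish the test message programmatically instead of using the console, a short Boto 3 sketch like the following works as well. It assumes AWS credentials with iot:Publish permission in the same account and region as the rule.

import json
import boto3

iot_data = boto3.client('iot-data')

message = {
    "deviceId": "D123456",
    "volts": 70,
    "amps": 1.5,
    "watts": 90,
    "latitude": "45.0000",
    "longitude": "-122.0000",
    "timestamp": "1493750762445"
}

# Publish the sample payload to the topic the rule is listening on.
iot_data.publish(topic='solarPanels/D123456', qos=0, payload=json.dumps(message))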

If you want to monitor the rule’s execution, you can set up CloudWatch Logs for AWS IoT.

 

Log into the Salesforce IoT Cloud console to see the message that was sent from AWS IoT.

 

Next steps

Refer to the AWS IoT developer documentation for more information on how to use this new action.

Or, sign into the AWS IoT console to try it.

To learn more about AWS IoT, visit the AWS website. To learn more about Salesforce IoT Cloud, visit the Salesforce website.

Understanding the AWS IoT Security Model

According to Gartner, the Internet of Things (IoT) has enormous potential for data generation, with roughly 21 billion endpoints expected to be in use in 2020 (1). It’s easy to get excited about this rapid growth and envisage a future where the digital world extends further into our world. Before you decide to deploy devices into the wild, it’s vital to understand how you will maintain your security perimeter.

In this post, I will walk you through the security model used by AWS IoT. I will show you how devices can authenticate to the AWS IoT platform and how they are authorized to carry out actions.

To do this, imagine that you are the forward-thinking owner of a Pizza Restaurant. A few years ago, most of your customers would have picked up the phone and actually spoken to you when ordering a pizza. Then it all moved on-line. You now want to give your customers a new experience, similar to the Amazon Dash Button. One press of an order button and you will deliver a pizza to your customer.

The starting point for your solution will be the AWS IoT Button. This is a programmable button based on Amazon Dash Button hardware. If you choose to use an AWS IoT Button, the easiest way to get up and running is to follow one of the Quickstart guides. Alternatively, you can use the Getting Started with AWS IoT section of the AWS Documentation.

Who’s Calling?

When someone presses an AWS IoT Button to order a pizza, it’s important to know who they are. This is obviously important because you will need to know where to deliver the pizza, but you also want only genuine customers to order. In much the same way that existing on-line customers identify themselves with a username, each AWS IoT Button needs an identity. In AWS IoT, for devices using MQTT to communicate, this is done with an X.509 certificate.

Before I explain how a device uses an X.509 certificate for identity, it is important to understand public key cryptography, sometimes called asymmetric cryptography (feel free to skip to the next section if you are already familiar with this). Public key cryptography uses a pair of keys to enable messages to be securely transferred. A message can be encrypted using a public key and the only way to decrypt it is to use the corresponding private key:

A key pair is a great way for others to send you secret data: if you keep your private key secure, anyone with access to the public key can send you an encrypted message that only you can decrypt and read.

In addition, public and private keys also allow you to sign documents. Here, a private key is used to add a digital signature to a message. Anyone with the public key can check the signature and know the original message hasn’t been altered:

In addition to demonstrating that a message hasn’t been tampered with, a digital signature can be used to prove ownership of a private key. Anyone with the public key can verify a signature and be confident that, when the message was signed, the signer was in possession of the private key.

Create an Identity

An X.509 certificate is a document that is used to prove ownership of a public key. To make a new X.509 certificate you need to create a Certificate Signing Request (CSR) and give it to a Certificate Authority (CA). The CSR is a digital document that contains your public key and other identifying information. When you send a CSR to a CA it first validates that the identifying information you’ve supplied is correct, for example you may be asked to prove ownership of a domain by responding to an email. Once your identity has been verified, the CA creates a certificate and signs it with a private key. Anyone can now validate your certificate by checking its digital signature with the CA’s public key.

At this point you may be wondering why you should trust the CA and how you know the public key it gave you is genuine. The CA makes it easy to prove the ownership of its public key by publishing it in an X.509 certificate. The CA’s certificate is itself signed by another CA. This sets up a chain of trust where one CA vouches for another. This chain goes back until a self-signed root certificate is reached.

There are a small number of well-known root certificates. For example, you can find lists of certificates that are installed in macOS Sierra or available to Windows computers as part of the Microsoft Trusted Root Certification Program (free TechNet account needed to view). The chain of trust allows anyone to check the authenticity of any certificate by examining it all the way to a well-known, trusted root certificate:

Since each of your pizza order buttons will need a separate identity, you will need an X.509 certificate for each device. The diagram below shows how a new X.509 certificate is made for a device by AWS IoT. When creating a new certificate, you have three choices. The easiest (option 1 below) is to use the one-click generation. Here, AWS will create a public and private key and follow the process through to create a new certificate signed by the AWS IoT CA. The second option is to provide your own CSR. This has the advantage that you never give AWS sight of your private key. As with option 1, the new certificate generated from the CSR is signed by the AWS IoT CA. The final option is to bring your own certificate signed by your own trusted CA. This choice is best if you already generate your own certificates as part of your device manufacture or you already have a large number of devices in the field. You can find out more about using your own certificates in this blog post.

At the end of this process, you should be in possession of both the new device certificate and its private key. Whether you need to download these from AWS depends on whether you chose option 1 (you need to download the certificate and the private key), option 2 (you only need to download the certificate), or option 3 (you already have both the certificate and the key, so you don’t need to download anything).
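
If you go with option 1, the same step can be performed through the API. The following Boto 3 sketch creates the key pair and certificate and saves the two files you need on the device; the file names are placeholders.

import boto3

iot = boto3.client('iot')

# Option 1: AWS generates the key pair and a certificate signed by the AWS IoT CA.
# The private key is only returned in this response, so save it immediately.
response = iot.create_keys_and_certificate(setAsActive=True)

with open('device-certificate.pem', 'w') as f:       # placeholder file name
    f.write(response['certificatePem'])
with open('device-private-key.pem', 'w') as f:       # placeholder file name
    f.write(response['keyPair']['PrivateKey'])

print(response['certificateArn'])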

At this point, you also need to get a copy of the root certificate used by the AWS IoT server. As you will see below, this is important when establishing an authenticated link with the AWS IoT service.

All three files (the private key, the device certificate and the AWS IoT server certificate) need to be put onto your pizza ordering button. Note that if you are using an AWS IoT Button, you don’t need to put the root certificate onto the device explicitly because it was put onto the device for you when it was manufactured.

Authenticating to AWS IoT

Now that the certificates and private key are on your AWS IoT Button, you are ready to establish a connection to AWS IoT and authenticate. The protocol used is Transport Layer Security (TLS) 1.2, which is the successor to Secure Sockets Layer (SSL). This is the same protocol that you use to securely shop or bank on the internet but, in addition to server authentication, the client also uses an X.509 certificate to prove its identity.

The connection starts with the AWS IoT Button contacting the Authentication and Authorization component of AWS IoT with a hello message:

The hello message is the start of a TLS handshake, which will establish a secure communication channel between the AWS IoT Button and AWS IoT. During the handshake, the client and server will agree on a shared secret, rather like a password, which will be used to encrypt all messages. A shared secret is preferred over using asymmetric keys because it is less expensive in terms of the computing power needed to encrypt messages, so you get better communication throughput. The hello message contains details of the various cryptographic methods that the AWS IoT Button is able to use.

When the server receives a hello message it picks the cryptographic method it wants to use to establish the shared secret and returns this, together with its server certificate, to the AWS IoT Button:

Now that the AWS IoT Button has a copy of the server certificate, it can check that it is really talking to AWS IoT. It does that by using the AWS IoT Service root certificate that you downloaded and put on the device. The public key that’s embedded in the root certificate is used to validate the digital signature on the server certificate:

If the digital signature checks out with the root certificate’s public key, then the AWS IoT Button trusts that it has made contact with the AWS IoT service. It now needs to do two things: first, it needs to authenticate itself with AWS IoT, and second, it needs to establish a shared secret for future communication.

To authenticate itself with AWS IoT, the AWS IoT Button first sends a copy of its device certificate to the server:

To complete the authentication process, the AWS IoT Button calculates a hash over all the communication records that are part of the current session with the AWS IoT Server. It then calculates a digital signature for this hash using its private key:

The digital signature is then sent to AWS IoT.

AWS IoT is now in possession of the device’s public key (which was in the device certificate) and the digital signature. Whilst the TLS handshake has been proceeding, the AWS IoT service has also been keeping a record of all communication and calculates the same hash as the AWS IoT Button. It uses the device’s public key to check the accuracy of the digital signature:

If the signature checks out, AWS IoT can be confident that it is talking to a pizza ordering device belonging to one of your customers. By using the unique identifier of the certificate, it knows exactly which device is establishing an MQTT session.

The exact method by which a shared secret is established depends on the key exchange algorithm that the server and client agreed on at the beginning of the handshake. However, the process is started by the AWS IoT Button encrypting a message using the server’s public key (which it got from the server’s certificate). The message might be a pre-master-secret, a public key or nothing. This is sent to the server and can be decrypted using the server’s private key. Both the server and the AWS IoT Button then use the contents of the message to establish a shared secret without needing further communication. From then on, all messages between the device and AWS IoT are secured using the shared secret.
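To make the handshake above concrete, here is a minimal sketch of a device publishing one message over a mutually authenticated TLS 1.2 connection, using the open-source Paho MQTT client for Python. The endpoint, file names, and payload are placeholders, not values from the original post; the AWS IoT Device SDKs perform the same steps for you.

# Minimal sketch: publish one message over mutual TLS (placeholders throughout)
import ssl
import paho.mqtt.publish as publish

publish.single(
    topic="iotbutton/G03XXXXXXXXXXXXX",                     # the button's own topic
    payload='{"clickType": "SINGLE"}',                      # example payload
    qos=1,
    hostname="xxxxxxxx.iot.us-east-1.amazonaws.com",        # your account-specific AWS IoT endpoint
    port=8883,                                              # MQTT over TLS
    client_id="pizza-button-G03XXXXXXXXXXXXX",
    tls={"ca_certs": "root-CA.pem",                         # AWS IoT server root certificate
         "certfile": "device-certificate.pem.crt",          # device X.509 certificate
         "keyfile": "private-key.pem.key",                  # device private key
         "tls_version": ssl.PROTOCOL_TLSv1_2},
)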

Permission to Order

The pizza order button has used its X.509 certificate to prove its identity and secure the messages it exchanges with AWS IoT. It is now ready to order pizza. Each AWS IoT Button publishes MQTT messages to its own topic, for example:

iotbutton/G03XXXXXXXXXXXXX

The second part is the serial number of the AWS IoT Button. It’s important that your system implements least privilege security and only permits an AWS IoT Button to publish to its own topic. For example, a nefarious customer could re-program their button to publish to a neighbor’s topic. When the pizza turns up, it’s simple social engineering to intercept the delivery and claim a free meal.

As you’ve seen, a device certificate is similar to a user’s username; it’s their identity. To give this identity permissions, you need to attach a policy to the certificate, in much the same way as you would attach permissions or policies to an IAM user.

The default policy for an AWS IoT Button is shown below. It grants the owner of the certificate rights to publish to the topic specified in the ‘Resource’ attribute.
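A representative version of that default policy looks something like the following, where the region, account ID, and button serial number are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:us-east-1:123456789012:topic/iotbutton/G03XXXXXXXXXXXXX"
    }
  ]
}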

In this policy, the serial number is hard-coded. This solution will not scale well as you will need a separate policy for each AWS IoT Button.

Fortunately, the policy language can help us with variable substitutions. For example, the following policy can be applied to all our devices. Instead of hard coding the serial number, the AWS IoT Service obtains it from the certificate that was used to authenticate the device. This assumes that when you created the certificate, the serial number was part of the identifying information.
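As a sketch of that approach, the policy below replaces the hard-coded serial number with a certificate policy variable. This assumes the button’s serial number was set as the certificate’s common name when the certificate was created; the exact variable you use depends on which subject field carries the serial number.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:us-east-1:123456789012:topic/iotbutton/${iot:Certificate.Subject.CommonName}"
    }
  ]
}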

You can check out the documentation for further information on AWS IoT Policies and the substitution variables that you can use.

Summary

In this post, I have introduced you to the AWS IoT security model and shown you how devices are authenticated with the service and how they can be authorised to carry out actions.

You can purchase your own AWS IoT Button here or, if you plan a more sophisticated solution, you may want to check out this page, which has lots of ideas for getting started on AWS IoT, including some starter kits.

If you have any questions or suggestions, please leave a comment below.

 


 

How AI Will Accelerate IoT Solutions for Enterprise

Artificial intelligence (AI) is going mainstream, which has long-lasting implications for how enterprises build and improve their products and services.

As announced at re:Invent 2016, AWS is simplifying artificial intelligence (AI) adoption through three low-cost, cloud-based services built for AI-specific use cases. Instead of creating proprietary algorithms, data models, or machine learning techniques, all levels of developers, from Global 2000 enterprises to start-ups, can leverage the Amazon Lex, Amazon Rekognition, and Amazon Polly APIs to innovate quickly and build new Internet of Things (IoT) product and service offerings. Accenture is delivering these innovative offerings by supporting clients with vertical industry applications powered by Amazon AI.

Combining AI with IoT is essential because it enables businesses to collect data in the physical world (from wearables, appliances, automobiles, mobile phones, sensors, and other devices) and add intelligence to deliver a better response or outcome. In other words, AI is the automation brainpower that makes IoT device-driven data more useful.

For example, a telecommunications company could create an AI-powered mobile chat bot to automate customer service processes. One use case would be to monitor incoming IoT data from cable boxes installed in homes. If a device started to malfunction, the mobile chat bot could notify the customer via text or voice interaction of a possible service issue and offer the convenience of scheduling a service technician. This device-driven data could leverage AWS Lambda for serverless functions, as well as AWS Greengrass for embedded software on the edge, using the AWS Cloud as needed for processing, storage, and compute.

API functionality overview and real-world uses

Developers can embed the APIs, used separately or in combination, into existing smart product and service roadmaps, or inject them into cloud-native programming processes.

  • Amazon Lex—AI-driven processing engine that computes voice input or sensor data to better understand and personalize an experience or outcome (part of Alexa voice platform)
  • Amazon Rekognition—Image and facial analysis to detect and understand environment and what is happening in real-life scenario or picture
  • Amazon Polly—Text-to-speech service that synthesizes structured text data into natural voice-like capability (in male or female voice and in 24 languages) to enrich response.

Today, businesses typically run analytics in the cloud on transactional datasets, such as customer purchases or location-based information. But, IoT data combined with AI provides a deeper level of insights. By collecting real-time data from IoT devices (or what is known as device-driven data), a business can use an AI engine to automate the information processing and connect different sources of unstructured/structured data to contextualize what a person is asking for. From this understanding, the machine can provide a personalized response or experience directly to the end user, or route the response back into the enterprise to automate another process.

This capability opens an entirely new set of IoT-based product and service offerings. Accenture recently released its Technology Vision 2017, which explains the benefits of AI for the enterprise. For instance, a healthcare business could implement Amazon Lex and Amazon Rekognition to improve the process of monitoring house-bound or elderly patients who need assisted living. In one use case, the service could install a video camera to take pictures of an individual, analyze the images in the cloud to keep track of movements, and send an automatic alert to a caregiver or family member if the patient has not moved in a specified amount of time or has fallen.

Expanding AI and IoT opportunities

In the future, AI combined with IoT will introduce even more scenarios in which robots (aka automated machines) collaborate with people to supply intelligent information and augment human interaction. This will help people to complete tasks more efficiently, interact in a more personalized way or supply on-demand services.

In a retail setting, for example, a business could create a collaborative artificial intelligence (“cobot”) application using Amazon Lex and Amazon Rekognition that analyzes facial features of in-store shoppers in real-time and combines this information with purchase transaction history. The cobot could then prompt sales associates to offer customized help to each customer as they choose items. Or in a hospital situation, Amazon Lex and Amazon Rekognition could be built into an application that uses AI and cloud-based big data, all connected with IoT, to help physicians better diagnose their patients. Examples include detecting skin anomalies with image analysis or stress-related symptoms.

With AWS’s new AI-driven APIs, developing IoT products and services with AI capabilities is becoming cost-effective and accessible for all businesses, and leveraging Accenture to deliver new applied solutions gives enterprises a quick way to adopt them at scale.

 

Connect your devices to AWS IoT using the Sigfox network

Connectivity is a key element to evaluate when designing IoT systems, as it weighs heavily on the performance, capabilities, autonomy of battery-powered objects, and cost of the overall solution. There is no one network that fits all scenarios, which is why AWS partners with many different network providers; you can then choose the most relevant network to satisfy your business requirements. In this blog post, we’ll explore providing LPWAN connectivity to your objects using the Sigfox network. Pierre Coquentin (Sigfox – Software Architect) will explain what Sigfox is and how to connect objects, while Jean-Paul Huon (Z#bre – CTO) will share his experience using Sigfox with AWS in production.

Why Sigfox?

Sigfox provides global, simple, cost-effective, and energy-efficient solutions to power the Internet of Things (IoT). Today, Sigfox’s worldwide network and broad ecosystem of partners are already enabling companies to accelerate digital transformation and to develop new services and value.

In order to connect devices to its global network, Sigfox uses an ultra-narrow-band (UNB) radio technology. The technology is key to providing a scalable, high-capacity network with very low energy consumption, while maintaining a light and easy-to-rollout infrastructure. The company operates in the ISM bands (license-free frequency bands), on the 902MHz band in the U.S., as well as the 868MHz band in Europe.

Once devices are connected to the Sigfox network, data can be transmitted to AWS IoT, enabling customers to create IoT applications that deliver insight into and the ability to act upon their data in real-time.

Please find more information at https://www.sigfox.com/

Send data from Sigfox to AWS IoT

We’ll start from the assumption that you already have objects connected and sending data to the Sigfox network. All that is left to do is to configure the native AWS IoT connector to push your data to the AWS Cloud. To make things a bit more interesting, we will store all the data sent by your devices in an Amazon DynamoDB table.

Fig1

In order to implement this architecture, we are going to perform the following steps:

  • Configure the AWS IoT Connector in the Sigfox Console
  • Provision the necessary resources on AWS so Sigfox can send data into your AWS account securely through the AWS IoT connector using a CloudFormation script that will generate IAM roles and permissions.
  • Manually create a rule in AWS IoT and a DynamoDB table so we can store the data coming from Sigfox into the DynamoDB table

In our example, we are using the US East 1 region. We recommend you go through this tutorial once by using the exact same configuration. Once you gain knowledge on how to configure the different pieces, then customize the implementation to fit your needs.

First, log into the Sigfox console, go to the “Callbacks” section and click on the “New” button to create a new “Callback”.

Fig2

Now select the “AWS IoT” option as the type of “Callback”.

Fig3

Copy the “External Id” shown to your clipboard; you will need it later. The “External Id” is unique to your account and provides greater security when authorizing a third party to access your AWS resources. You can find more information here.
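For context, an external ID is typically enforced as a condition in the trust policy of the cross-account IAM role that the CloudFormation stack creates. The following is a generic sketch of that pattern; the account ID and external ID are placeholders rather than values from the Sigfox connector:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "YOUR_EXTERNAL_ID" } }
    }
  ]
}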

Next click on “Launch Stack” and leave the “CROSS_ACCOUNT” option selected.

Fig4

This will redirect you to the AWS CloudFormation console. Click “Next” on the first screen.

Fig5

On the following screen, enter the following inputs:

  • Stack name: Choose a meaningful name for the connector.
  • AWSAccountId: Input your AWS account ID; you can find it here.
  • External Id: Copy/paste the external Id given to you in the Sigfox console.
  • Region: Choose the region where AWS IoT will be used.
  • Topic Name: Choose the topic name you wish to send data to.

Click “Next” once you are ready.

Fig6

The next screen is optional; if you wish, you can customize options (Tags, Permissions, Notifications). Otherwise, click “Next” to continue with the default options. You should now be on the review screen. Check the “I acknowledge that AWS CloudFormation might create IAM resources” box and click “Create” to launch the CloudFormation stack.

After a few minutes the provisioning should be completed.

Fig7

After selecting the AWS CloudFormation stack, click on the “Outputs” tab and copy the value for the “ARNRole” key, the “Region” key and the “Topic” key.

Fig8

Go back to the Sigfox console and paste the values you copied from the “Outputs” section of the AWS CloudFormation stack. Also fill out the “Json Body” field in the Sigfox console. This JSON represents the payload that will be sent to AWS IoT through the native connector; it contains the payload from the connected device as well as some metadata. This is a point for future customization using the Sigfox documentation if you wish to do so.

{
  "device" : "{device}",
  "data" : "{data}",
  "time" : "{time}",
  "snr" : "{snr}",
  "station" : "{station}",
  "avgSnr" : "{avgSnr}",
  "lat" : "{lat}",
  "lng" : "{lng}",
  "rssi" : "{rssi}",
  "seqNumber" : "{seqNumber}"
}

Finally, click “Ok”.

Fig9

You now have successfully created your callback and can visualize the data sent to it.

Fig10

Now that the data is being sent to AWS IoT via the native connector, we will create an AWS IoT Rule to store the data into an Amazon DynamoDB table.

Start by logging into the Amazon DynamoDB console and then click “Create table”.

Fig11

Give the table the name “sigfox” and create a Partition Key “deviceid” as well as a Sort Key “timestamp”. Then create the table.

Fig12

After a couple of minutes, the Amazon DynamoDB table is created. Now, go to the AWS IoT console and create a new rule.

Fig13

Now we will send every message payload coming from Sigfox in its entirety to the DynamoDB table. To do this we are using “*” as the attribute, “sigfox” as the topic filter, and no conditions.

Fig14

Next add an action, select “Insert a message into a DynamoDB table”.

Fig15

Select the Amazon DynamoDB table we created previously. In the Hash Key value, input “${device}”, and for the Range Key value, input “${timestamp()}”. With this configuration, each device’s ID will represent a Hash Key in the table, and the data stored under that Hash Key will be ordered by the timestamp generated by the AWS IoT Rules Engine, which is used as the Sort Key. Finally, create a new role by clicking the “Create a new role” button. Name it “dynamodbsigfox”, click “Create a new role” again, and you can then select it in the drop-down list. Thanks to this IAM role, AWS IoT can push data on your behalf to the Amazon DynamoDB table using the “PutItem” permission.

Fig16

Add the action to the rule and create the rule. You should now be able to visualize the newly created rule in the AWS Console.
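If you prefer to script these last steps instead of using the console, a roughly equivalent rule could be created with the AWS CLI. This is only a sketch: it assumes the “sigfox” topic, table, key names, and “dynamodbsigfox” role created above, and the rule name is chosen purely for illustration.

aws iot create-topic-rule --rule-name sigfox_to_dynamodb --topic-rule-payload file://sigfox-rule.json

Contents of the sigfox-rule.json file:

{
    "sql": "SELECT * FROM 'sigfox'",
    "ruleDisabled": false,
    "actions": [{
        "dynamoDB": {
            "tableName": "sigfox",
            "roleArn": "arn:aws:iam::000000000000:role/dynamodbsigfox",
            "hashKeyField": "deviceid",
            "hashKeyValue": "${device}",
            "rangeKeyField": "timestamp",
            "rangeKeyValue": "${timestamp()}"
        }
    }]
}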

Fig17

The final step is to go back to the Amazon DynamoDB console and observe the data sent from Sigfox to AWS IoT through the native connector. You can do this by selecting your table and using the “Items” tab to browse the items. Once you see the items, click on a record to see the payload value.

Fig18

Fig19

Using this example’s basic flow, you can now create other AWS IoT rules that route the data to other AWS services. You might want to perform archiving, analytics, machine learning, monitoring, alerting, and other functions. If you want to learn more, the AWS IoT documentation is a good place to start.

Z#BRE testimony – Use case with Sigfox

Z#BRE has developed an IoT solution for social care based on AWS IoT and Sigfox: “Z#LINK for Social Care”. The goal is to improve efficiency of social care & create a social link for elderly people. Society is increasingly connected and people are sharing more real-time information with their community. In the context of elderly people, this means they are sharing information with their community, in particular about care they are receiving each day.

We have developed a smart object that enables the elderly community (relatives, neighbors, care companies, etc.) to inform their community in real-time whenever a care practitioner delivers care. These real-time insights coming from care data enable public institutions to work better with care companies and to optimize costs while improving care quality.

Thanks to Sigfox connectivity, we have created an object that does not require any setup or Internet connection and can work for at least two years on four batteries. This object’s use of Sigfox is key to the simplicity and setup of the overall solution.

Thanks to that simple setup, Sigfox allows faster deployment of the hardware. With low power consumption and the use of batteries, there is also no need for elderly people to plug in or unplug the device, so there is no risk that they will forget to recharge it.

Our infrastructure is based on different AWS services as shown in the following diagram:

Fig20

Our customer, the public council of the Loiret (a department in France), saves 3 million euros per year thanks to the implementation of this solution. More than 10,000 elderly people were equipped over a period of 5 months and more than 70 home care associations were involved in the project. As a result, this initiative was shown to have brought better care quality to elderly people.

Please find more information at https://zbre.io/

Next steps

The release of this native connector is the first step in making it easier for customers to connect Sigfox-enabled objects to the AWS Cloud in order to make use of all the Cloud Computing services available on the AWS platform.

We are actively listening to any feedback from customers to continue iterating on this integration in the future and to add more capabilities. Please reach out to sigfox@amazon.com to provide feedback.

As the Sigfox network is growing fast globally, and the AWS IoT platform is adding new features, we are really looking forward to seeing what new projects customers will deploy!

 

IoT for Non-IP Devices

Connected devices have found their way into a myriad of commercial and consumer applications. Industries have already moved, or are in the process of moving, to operational models that require them to measure broad data points in real time and optimize their operations based on analysis of this data. The move to smart connected devices can become costly if components must be upgraded across the entire infrastructure. This blog post explores how AWS IoT can be used to gather remote sensor telemetry and control legacy non-IP devices through remote infrared (IR) commands over the Internet.

In agriculture, greenhouses are used to create ideal growing conditions to maximize yield. Smart devices allow metrics like light level, temperature, humidity, and wind to be captured not just for historical purposes, but to react quickly to a changing environment. The example used in this blog post involves gathering light readings and activating an infrared-controlled sun shade in the greenhouse based on the current illuminance levels. A lux sensor will be placed directly under the area that we are going to control. Readings will be captured on a minute-by-minute basis. For more complex implementations, you can configure additional sensors and control devices.

Solution Architecture

 

AWS solution architecture

 

The IoT implementation has the following features:

  • Measures and transmits telemetry once a minute.
  • Uses TLS encryption to send telemetry.
  • Monitors telemetry and issues alarms when thresholds are exceeded.
  • Delivers event notifications to a mobile device through SMS messages.
  • Sends IR commands over Ethernet to operate the greenhouse controls.
  • Logs telemetry for reporting purposes.

The implementation’s hardware includes a lux sensor to capture illuminance readings and an Ethernet-to-IR repeater to relay commands to the greenhouse controls.

We’re using the MQTT protocol because it is a lightweight yet reliable mechanism for sending telemetry. You can access other AWS services through AWS IoT actions. In this implementation, we used actions to integrate with Amazon CloudWatch and Amazon DynamoDB. CloudWatch logs the telemetry and raises an alarm if a threshold is breached. The alarm publishes to an Amazon SNS topic, which invokes a Lambda function that sends the IR command to the remote device. DynamoDB is used as a long-term, durable store of historic telemetry for reporting purposes.

AWS Services Setup

This implementation uses several AWS services to create an end-to-end application to monitor and control greenhouse devices. In addition to the configuration of each service, we also need to create the roles and policies that will allow these services to work together.

IAM

We use IAM roles to provide the appropriate amount of access to the AWS services.

Create the CloudWatch role

Step 1. Create the CloudWatch Events role for AWS IoT to use.

Copy and paste the following into a file named aws_iot_role_policy_document.json. This document contains a trust policy that allows the AWS IoT service to assume the aws_iot_cloudwatchMetric role we create in the next step.

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "iot.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}

Step 2. Create an IAM role named aws_iot_cloudwatchMetric.

This is the identity used by the AWS IoT action to send telemetry to CloudWatch.

From the command line, run the following command.

aws iam create-role --role-name aws_iot_cloudwatchMetric --assume-role-policy-document file://aws_iot_role_policy_document.json

Upon successful execution of this command, an ARN for this role will be returned. Make a note of the ARN for the aws_iot_cloudwatchMetric role. You will need it during the IoT action setup.

Step 3. Create a policy document named aws_iot_cloudwatchMetric.json. It will allow the aws_iot_cloudwatchMetric role to access Amazon CloudWatch.

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "cloudwatch:PutMetricData",
        "Resource": [
           "*"
        ]
    }
}

Step 4. Attach aws_iot_cloudwatchMetric.json to the aws_iot_cloudwatchMetric role.

aws iam put-role-policy --role-name aws_iot_cloudwatchMetric --policy-name aws_iot_cloudwatch_access --policy-document file://aws_iot_cloudwatchMetric.json

Create the Lambda role

Now we’ll create a second role that will allow AWS Lambda to execute our function.

Step 1. Copy and paste the following to a file named aws_lambda_role_policy_document.json.

This document contains a policy that will allow AWS Lambda to assume the role we will create in the next step.

{
   "Version": "2012-10-17",
   "Statement": {
     "Effect": "Allow",
     "Principal": {"Service": "lambda.amazonaws.com"},
     "Action": "sts:AssumeRole"
   }
}

Step 2. Create an IAM role named aws_lambda_execution.

This is the identity used by Lambda to execute the function.

aws iam create-role --role-name aws_lambda_execution --assume-role-policy-document file://aws_lambda_role_policy_document.json

Upon successful execution of this command, an ARN for this role will be returned. Make a note of the ARN for the aws_lambda_execution role. You will need it during the Lambda setup.

Step 3. Create the policy document named aws_lambda_execution.json that will allow the aws_lambda_execution role to put events into CloudWatch.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

Step 4. Attach aws_lambda_execution.json to the aws_lambda_execution role.

aws iam put-role-policy --role-name aws_lambda_execution --policy-name aws_iot_lambda_access --policy-document file://aws_lambda_execution.json

Create the DynamoDB role

In order to store the telemetry to a persistent data store, we will create a role for AWS IoT to use.

Step 1. Create the AWS IoT trust policy document. Copy and paste the following to a file named aws_iot_dynamoDB_role_policy_document.json. This document contains a trust policy that allows the AWS IoT service to assume this role.

{
   "Version": "2012-10-17",
   "Statement": {
     "Effect": "Allow",
     "Principal": {"Service": "iot.amazonaws.com"},
     "Action": "sts:AssumeRole"
   }
}

Step 2. Create an IAM role named aws_iot_dynamoDB.

This is the identity used by AWS IoT to send telemetry to DynamoDB.

aws iam create-role --role-name aws_iot_dynamoDB --assume-role-policy-document file://aws_iot_dynamoDB_role_policy_document.json

Upon successful execution of this command, an ARN for this role will be returned. Make a note of the ARN for the aws_iot_dynamoDB role. You will need it during the DynamoDB setup.

Step 3. Create a policy document named aws_iot_dynamoDB.json that will allow the aws_iot_dynamoDB role to write items to the DynamoDB table.

{
   "Version": "2012-10-17",
   "Statement": {
     "Effect": "Allow",
     "Action": "dynamodb:PutItem",
     "Resource": "arn:aws:dynamodb:us-east-1:000000000000:table/IoTSensor"
   }
}

Step 4. Attach aws_iot_dynamoDB.json to the aws_iot_dynamoDB role.

aws iam put-role-policy --role-name aws_iot_dynamoDB --policy-name aws_iot_dynamoDB_access --policy-document file://aws_iot_dynamoDB.json

Now that the IAM roles and policies are in place, we can configure AWS IoT and the associated rules.

Set up AWS IoT

Let’s set up AWS IoT as the entry point for device communications. As soon as AWS IoT is communicating with the greenhouse sensors, we will use the AWS IoT rules engine to take further action on the sensor telemetry. The AWS IoT rules engine makes it easy to create highly scalable solutions that integrate with other AWS services, such as DynamoDB, CloudWatch, SNS, Lambda, Amazon Kinesis, Amazon Elasticsearch Service, Amazon S3, and Amazon SQS.

Create a thing

From the AWS CLI, follow these steps to create a thing.

Step 1. Create a thing that represents the lux meter.

aws iot create-thing --thing-name "greenhouse_lux_probe_1"

Step 2. Create the policy.

Start by creating a JSON policy document to use with the create-policy command. Copy and paste the following into a document named iot_greenhouse_lux_probe_policy.json. Be sure to replace 000000000000 with your AWS account number.

   "Version": "2012-10-17",
   "Statement": [
{
       "Effect": "Allow",
       "Action": [
         "iot:Connect"
       ],
       "Resource": [
         "arn:aws:iot:us-east-1:000000000000:client/${iot:ClientId}"
       ]
     },
     { 
        "Effect": "Allow",
        "Action": [
          "iot:Publish"
        ],
        "Resource": [
          "arn:aws:iot:us-east-1:000000000000:topic/Greenhouse/${iot:ClientId}"
      ]
    }
  ]
}

Now, run the following command to create the policy. Be sure to include the full path to the policy document.

aws iot create-policy --policy-name "greenhouse_lux_policy" --policy-document file://iot_greenhouse_lux_probe_policy.json

Step 3. Create a certificate.

Creating a certificate pair is a simple process when you use the AWS IoT CLI. Use the following command to create the certificate, mark it as active, and then save the keys to the local file system. These keys will be required for authentication between the thing and AWS IoT.

aws iot create-keys-and-certificate --set-as-active --certificate-pem-outfile IoTCert.pem.crt --public-key-outfile publicKey.pem.key --private-key-outfile privateKey.pem.key

Step 4. Attach the thing and policy to the certificate.
Using the following as an example, replace 000000000000 with your AWS account number and 22222222222222222222222222222222222222222222 with your certificate ARN. This will attach the thing to the certificate.

aws iot attach-thing-principal --thing-name greenhouse_lux_probe_1 --principal arn:aws:iot:us-east-1:000000000000:cert/22222222222222222222222222222222222222222222

Now, attach the policy to the certificate.

aws iot attach-principal-policy --policy-name greenhouse_lux_policy --principal arn:aws:iot:us-east-1:000000000000:cert/22222222222222222222222222222222222222222222

Now that you have created a thing, policy, and certificate, you might also want to test connectivity to AWS IoT using a program like aws-iot-elf, which is available from the AWS Labs Github repo. After you have confirmed connectivity, you can build out the remainder of the application pipeline.
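In addition to aws-iot-elf, you could publish a test reading directly with the certificate and keys created above. The following is a rough sketch using the open-source Paho MQTT client for Python; the endpoint is a placeholder, and the topic must match both your thing’s policy and the topic filters used by the rules in the next sections.

# Sketch: publish a single lux reading as greenhouse_lux_probe_1 (placeholders throughout)
import json
import ssl
import paho.mqtt.publish as publish

publish.single(
    topic="/topic/Greenhouse/LuxSensors",                  # adjust to match your policy and rules
    payload=json.dumps({"lux": 42000}),
    qos=1,
    hostname="xxxxxxxx.iot.us-east-1.amazonaws.com",       # your account-specific AWS IoT endpoint
    port=8883,
    client_id="greenhouse_lux_probe_1",
    tls={"ca_certs": "root-CA.pem",
         "certfile": "IoTCert.pem.crt",
         "keyfile": "privateKey.pem.key",
         "tls_version": ssl.PROTOCOL_TLSv1_2},
)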

Configure the AWS IoT rules engine

Creating rules is an extremely powerful and straightforward way to build a responsive, extensible architecture. In this example, we will record and respond to telemetry as fast as we can record and report it. Let's imagine we need to ensure that the crop is not exposed to light intensity greater than 35,000 lux. First, we will integrate AWS IoT with CloudWatch, so it can be used to decide what to do based on the received telemetry. Two rules are required to support this case: one rule called TooBright and a second rule called NotTooBright.

Step 1. Create a JSON file named create-TooBright-rule.json with the following content to serve as the rule policy. Be sure to use your AWS account number and the ARN for the aws_iot_cloudwatchMetric role.

{
    "sql": "SELECT * FROM '/topic/Greenhouse/LuxSensors' WHERE lux > 35000",
    "description": "Sends telemetry above 35,000 lux to CloudWatch to generate an alert",
    "actions": [
        {
            "cloudwatchMetric": {
                "metricUnit": "Count",
                "roleArn": "arn:aws:iam::000000000000:role/aws_iot_cloudwatchMetric",
                "metricValue": "1",
                "metricNamespace": "Greenhouse Lux Sensors",
                "metricName": "ShadePosition"
            }
        }
    ],
    "awsIotSqlVersion": "2016-03-23",
    "ruleDisabled": false
}

Step 2. From the command line, run this command to create the rule.

aws iot create-topic-rule --rule-name TooBright --topic-rule-payload file://create-TooBright-rule.json

Step 3. Create a JSON file named create-NotTooBright-rule.json with the following content to serve as the rule policy. Be sure to use the AWS account number and ARN for the aws_iot_cloudwatchMetric role that you created earlier. Note that the WHERE clause is now < 35000 and the metricValue is 0.

{
    "sql": "SELECT * FROM '/topic/Greenhouse/LuxSensors' WHERE lux < 35000",
    "description": "Sends telemetry below 35,000 lux to CloudWatch",
    "actions": [
        {
            "cloudwatchMetric": {
                "metricUnit": "Count",
                "roleArn": "arn:aws:iam::000000000000:role/aws_iot_cloudwatchMetric",
                "metricValue": "0",
                "metricNamespace": "Greenhouse Lux Sensors",
                "metricName": "ShadePosition"
            }
        }
    ],
    "awsIotSqlVersion": "2016-03-23",
    "ruleDisabled": false
}

Step 4. From the command line, run this command to create the rule.

aws iot create-topic-rule --rule-name NotTooBright --topic-rule-payload file://create-NotTooBright-rule.json

Set up SNS

We will configure SNS to invoke the Lambda function and deliver an SMS message to a mobile phone. The SMS notification functionality is useful for letting the greenhouse operations team know the system is actively monitoring and controlling the greenhouse devices. Setting up SNS for this purpose is a simple process.

Step 1. Create the SNS topic.

aws sns create-topic --name Sunshades

The SNS service returns the ARN of the topic.

{
    "TopicArn": "arn:aws:sns:us-east-1:000000000000:Sunshades"
}

Step 2. Using the topic ARN and a phone number where the SMS message should be sent, create a subscription.

aws sns subscribe --topic-arn arn:aws:sns:us-east-1:000000000000:Sunshades --protocol sms --notification-endpoint "1 555 555 5555"

The SNS service returns the subscription ARN.

{
    "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:Sunshades:0f1412d1-767f-4ef9-9304-7e5a513a2ac1"
}

Set up Lambda

We are going to use a Lambda function written in Python to make a socket connection to the remote Ethernet-to-IR device that controls the sun shade.

Step 1. Sign in to the AWS Management Console, and then open the AWS Lambda console. Choose the Create a Lambda function button.

Step 2. On the blueprint page, choose Configure triggers.

Step 3. On the Configure triggers page, choose SNS. From the SNS topic drop-down list, choose the Sunshades topic.

Step 4. Select the Enable trigger check box to allow the SNS topic to invoke the Lambda function, and then choose Next.

AWS Lambda Console screenshot

 

Step 5. On the Configure function page, type a name for your function (for example, Sunshade_Open).

Step 6. From the Runtime drop-down box, choose Python 2.7.

Step 7. Copy and paste the following Python code to create the Lambda function that will open the sun shades. Be sure to use the IP address and port of the remote Ethernet-to-IR communication device. Include the IR code for your device as provided by the manufacturer.

You can get the IR code through the learning function of the IR repeater. This process typically requires sending an IR signal to the IR repeater so that it can capture and save the code as binary. The binary values for the IR command are then sent as part of the IP packet destined for the IR repeater.

Lambda function to open the sun shade

# Lambda function to extend the sunshade
# when the lux reading is too high
import socket

def lambda_handler(event, context):
    HOST = 'xxx.xxx.xxx.xxx'  # IP address of the Ethernet-to-IR device
    PORT = 4998               # The port used by the Ethernet-to-IR device
    # Open a TCP connection to the IR repeater
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    # Send the IR command (replace with the code provided by your device's manufacturer)
    s.sendall('sendir,1:1,15,37993,1,1,246,160,20,20,60,\r,\l')
    data = s.recv(1024)
    s.close()
    print 'Received', repr(data)

In Role, choose an existing role. In Existing role, choose the aws_lambda_execution role you created earlier, and then choose Next.

AWS Lambda Python code

 

On the following page, review the configuration, and then choose Create function.

Choose the blue Test button and leave the default Hello World template as it is. Choose Save and Test to see if the function ran successfully. The Lambda function should have issued the remote IR command, so check to see if the sun shade device responded to the Lambda invocation. If the execution result is marked failed, review the logs on the test page to determine the root cause. If the Lambda function was successful but the sun shade did not move, double-check that you used the appropriate IR codes.

Now create the second Lambda function. ‘Sunshade_Close’ will be similar to ‘Sunshade_Open’, except it will contain the IR code for closing the shade.

Set up CloudWatch

We send a metric value of either 0 or 1 from the AWS IoT action to CloudWatch to indicate whether the sun shade should be opened or closed. In this example, 0 indicates that the lux level is below 35,000 and the shade can stay closed; 1 indicates a higher lux level that requires the sun shade to be opened to protect the crop. We’ll have a problem if the power to the devices is cycled too frequently. Not only is this an inefficient way to control devices, it can also damage the equipment. For this reason, we will use CloudWatch alarms with a 15-minute evaluation period to prevent devices from cycling between open and closed states too frequently. Each alarm triggers on the ShadePosition metric value that the AWS IoT actions publish.

The first alarm is called Trigger_SunShade_Open. This alarm will trigger when the ShadePosition value is greater than or equal to 1 for 15 consecutive minutes. We will treat the ShadePosition value as a binary value, where 1 indicates the lux is above the threshold and the sun shade should be opened. A value of 0 indicates that the sun shade should be closed. We define the period as a one-minute interval, which means the sun shade will change states no sooner than every 15 minutes. A second alarm called Trigger_SunShade_Close is created in the same way, except that the ShadePosition must be less than 1 for 15 minutes. Both alarms are configured with an action to send a notification to the appropriate SNS topic.

aws cloudwatch put-metric-alarm --alarm-name "Trigger_SunShade_Open" \
--namespace "Greenhouse Lux Sensors" --metric-name "ShadePosition" \
--statistic Sum --evaluation-periods "15" \
--comparison-operator "GreaterThanOrEqualToThreshold" \
--alarm-actions arn:aws:sns:us-east-1:000000000000:Sunshades \
--period "60" --threshold "1.0" --actions-enabled

Next, create the Trigger_SunShade_Close alarm in a similar manner to Trigger_SunShade_Open. This alarm will trigger when the ShadePosition value has been 0 for 15 consecutive minutes.

aws cloudwatch put-metric-alarm --alarm-name "Trigger_SunShade_Close" \
--namespace "Greenhouse Lux Sensors" --metric-name "ShadePosition" \
--statistic Sum --evaluation-periods "15" \
--comparison-operator "LessThanOrEqualToThreshold" \
--alarm-actions arn:aws:sns:us-east-1:000000000000:Sunshades \
--period "60" --threshold "0" --actions-enabled

Sign in to the AWS Management Console, open the CloudWatch console, and then look at the alarms.

Confirm the two alarms were created. Because of the 15-minute evaluation period, you need to wait 15 minutes to verify the alarms are working.

AWS CloudWatch Alarms showing insufficient data

Depending on the reported value of the ShadePosition variable, the state displayed for one alarm should be OK and the other should be ALARM.

After 15 minutes, we see the Trigger_SunShade_Close alarm is in the OK state, which means the alarm has not been raised and therefore the sun shade should not be closed.

AWS CloudWatch Alarm screenshot

Conversely, Trigger_SunShade_Open is in an ALARM state, which indicates the sun shade should be open.

AWS CloudWatch Alarm State

This alarm state should also have generated an SMS message to the mobile device that was configured in the SNS topic.

Set up DynamoDB

DynamoDB is the repository for the historical lux readings because of its ease of management, low operating costs, and reliability. We’ll use an AWS IoT action to stream telemetry directly to DynamoDB. To get started, create a new DynamoDB table.

aws dynamodb create-table --table-name Greenhouse_Lux_Sensor \
--attribute-definitions AttributeName=item,AttributeType=S AttributeName=timestamp,AttributeType=S \
--key-schema AttributeName=item,KeyType=HASH AttributeName=timestamp,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1

DynamoDB will return a description of the table to confirm it was created.

AWS IoT DynamoDB Action

Step 1. Create a JSON file named create-dynamoDB-rule.json with the following content to serve as the rule policy. Use your AWS account number and the ARN for the aws_iot_dynamoDB role you created earlier.

{
   "sql": "SELECT * FROM '/topic/Greenhouse/LuxSensors/#'",
   "ruleDisabled": false,
   "awsIotSqlVersion": "2016-03-23",
   "actions": [{
       "dynamoDB": {
           "tableName": "Greenhouse_Lux_Sensor",
           "roleArn": 
"arn:aws:iam::000000000000:role/aws_iot_dynamoDB",
           "hashKeyField": "item",
           "hashKeyValue": "${Thing}",
           "rangeKeyField": "timestamp",
           "rangeKeyValue": "${timestamp()}"
       }
   }]
}

Step 2. From the command line, run this command to create the rule.

aws iot create-topic-rule --rule-name Lux_telemetry_to_DynamoDB --topic-rule-payload file://create-dynamoDB-rule.json

Execute this command to verify that telemetry is successfully being sent to DynamoDB.

aws dynamodb scan --table-name Greenhouse_Lux_Sensor --return-consumed-capacity TOTAL

This command will scan the DynamoDB table and return any data that was written to it. In addition, it will return a ScannedCount with the number of objects in the table. If the ScannedCount is 0, make sure that telemetry is being sent to and received by AWS IoT.
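If the table is empty, one quick way to rule out problems downstream of the device is to inject a test message from the CLI. This is a sketch; the payload fields mirror the hash and range keys used by the rule above, and with AWS CLI v2 you may also need --cli-binary-format raw-in-base64-out so the payload is sent as raw JSON.

aws iot-data publish --topic "/topic/Greenhouse/LuxSensors/test" \
--payload '{"Thing": "greenhouse_lux_probe_1", "lux": 20000}'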

Summary

You now have a fully functional AWS IoT implementation that provides intelligent control of not-so-smart devices. You have also created a completely serverless solution that can serve a single device or billions of them, all without changing the underlying architecture. Lastly, charges for the services used in this implementation are based on consumption, which yields a very low TCO.

There are infinite uses for AWS IoT when you combine its cloud logic with the devices and sensors on the market. This post has shown how the power of this AWS service can be extended to non-IP devices, which can now be managed and controlled as if they were designed for IoT applications.

Access Cross Account Resources Using the AWS IoT Rules Engine

The AWS IoT platform enables you to connect your internet-enabled devices to the AWS Cloud via the MQTT, HTTP, or WebSockets protocols. Once connected, the devices can send data to MQTT topics. Data ingested on MQTT topics can be routed into AWS services (like Amazon S3, Amazon SQS, Amazon DynamoDB, and AWS Lambda) by configuring rules in the AWS IoT Rules Engine.

This blog post explains how to set up rules for cross-account data ingestion, from an MQTT topic in one account, to a destination in another account. We will focus on the cross-account access from an MQTT topic (the source) to Lambda and SQS (the destinations).

This blog post has been written with the assumption that you are familiar with AWS IoT and the Rules Engine, and have a fair understanding of AWS IAM concepts like users, roles, and resource-based permissions.

We are going to use the AWS CLI to set up cross-account rules. If you don’t have the AWS CLI installed, you can follow these steps. If you have the AWS CLI installed, make sure you are using the most recent version.

Why do you need cross-account access via rules engine?

Rules with cross-account access allow you to ingest data published on an MQTT topic in one account into a destination (S3, SQS, etc.) in another account. For example, Weather Corp collects weather data using its network of sensors and publishes that data on MQTT topics in its AWS account. If Weather Corp wishes to publish this data to an Amazon SQS queue in the AWS account of its partner, Forecast Corp, it can do so by enabling cross-account access via the AWS IoT Rules Engine.

How can you configure a cross-account rule?

Cross-account rules can be configured using the resource-based permissions on the destination resource.

Thus, for Weather Corp to create a rule in its account that ingests weather data into an Amazon SQS queue in Forecast Corp’s AWS account, cross-account access can be set up with the two-step method below:

 

  1. Forecast Corp creates a resource policy on its Amazon SQS queue, allowing Weather Corp’s AWS account to perform the sqs:SendMessage action.
  2. Weather Corp configures a rule with the Forecast Corp queue URL as its destination.

Note: Cross-account access via the AWS IoT Rules Engine needs resource-based permissions. Hence, only destinations that support resource-based permissions can be enabled for cross-account access via the AWS IoT Rules Engine. Following is the list of such destinations:

  • Amazon Simple Queue Service (SQS)
  • Amazon Simple Notification Service (SNS)
  • Amazon Simple Storage Service (S3)
  • AWS Lambda

Configure a cross-account rule

In this section, we explain how to configure a cross-account rule to access an AWS Lambda function and an Amazon SQS queue in a different account, using the AWS CLI.

The steps to configure a cross-account rule for AWS Lambda are different from those for other AWS services that support resource policies.

For Lambda:

The AWS IoT Rules Engine requires a resource-based policy to access Lambda functions, so a cross-account Lambda function invocation is configured just like any other IoT-Lambda rule. The process of enabling cross-account access for Lambda can be understood from the following example:

Assume that Weather Corp, using AWS account# 123456789012, wishes to trigger a Lambda function (LambdaForWeatherCorp) in Forecast Corp’s account (AWS account# 987654321012) via the Rules Engine. Further, Weather Corp wishes to trigger this rule when a message arrives on Weather/Corp/Temperature MQTT topic.

To do this, Weather Corp would need to create a rule (WeatherCorpRule) which will be attached to Weather/Corp/Temperature topic. To create this rule, Weather Corp would need to call the CreateTopicRule API. Here is an example of this API call via AWS CLI:

aws iot create-topic-rule --rule-name WeatherCorpRule --topic-rule-payload file://./lambdaRule

Contents of the lambdaRule file:

{
       "sql": "SELECT * FROM 'Weather/Corp/Temperature'", 
       "ruleDisabled": false, 
       "actions": [{
           "lambda": {
               "functionArn": "arn:aws:lambda:us-east-1:987654321012:function:LambdaForWeatherCorp"   //Cross account lambda
            }
       }]
}

Forecast Corp will also have to give the AWS IoT Rules Engine permission to trigger the LambdaForWeatherCorp Lambda function. It is also very important for Forecast Corp to make sure that only the AWS IoT Rules Engine is able to trigger the Lambda function, and that it does so only on behalf of Weather Corp’s WeatherCorpRule rule (created above).

To do this, Forecast Corp would need to use Lambda’s AddPermission API. Here is an example of this API call via AWS CLI:

aws lambda add-permission --function-name LambdaForWeatherCorp --region us-east-1 --principal iot.amazonaws.com --source-arn arn:aws:iot:us-east-1:123456789012:rule/WeatherCorpRule --source-account 123456789012 --statement-id "unique_id" --action "lambda:InvokeFunction"

Options:
--principal: This field gives permission to AWS IoT (represented by iot.amazonaws.com) to call the Lambda function.

--source-arn: This field makes sure that only the arn:aws:iot:us-east-1:123456789012:rule/WeatherCorpRule rule in AWS IoT triggers this Lambda (no other rule in the same or a different account can trigger this Lambda).

--source-account: This field makes sure that AWS IoT triggers this Lambda function only on behalf of the 123456789012 account.

Note: To run the above command, IAM user/role should have permission to lambda:AddPermission action.

For Other Services

As of today, the Rules Engine does not use resource policies to access non-Lambda AWS resources (Amazon SQS, Amazon S3, Amazon SNS). Instead, it uses an IAM role to access these resources in an account. Additionally, AWS IoT rules can only be configured with roles from the same account. This implies that a rule cannot be created in one account that uses a role from another account.

While a role from another account cannot be used in a rule, a role can be set up in an account to access resources in another account. Also, for a cross-account role to work, you need a resource policy on the resource that has to be accessed across accounts.

The process of creating a rule with access to cross-account resources can be understood from the following example:

Let’s assume that Weather Corp, using AWS account# 123456789012, wishes to send some data to an Amazon SQS queue (SqsForWeatherCorp) in Forecast Corp’s account (AWS account# 987654321012) via the Rules Engine, and that it wishes to trigger this rule when a message arrives on the Weather/Corp/Temperature MQTT topic.

To do this, Weather Corp would need to do the following things:

Step 1: Create an IAM policy (PolicyWeatherCorp) that defines cross-account access to SqsForWeatherCorp SQS queue. To do this, Weather Corp would need to call IAM’s CreatePolicy API. Here is an example of this API call via AWS CLI:

aws iam create-policy --policy-name PolicyWeatherCorp --policy-document file://./crossAccountSQSPolicy

Where the contents of crossAccountSQSPolicy file are below:

{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": “unique”,
           "Effect": "Allow",
           "Action": [
               "sqs:SendMessage"
           ],
           "Resource": [
               "arn:aws:sqs:us-east-1:987654321012:SqsForWeatherCorp" //Cross account SQS queue
           ]
       }
   ]
}

Step 2: Create a role (RoleWeatherCorp) that defines iot.amazonaws.com as a trusted entity. To do this Weather Corp would need to call IAM’s CreateRole API. Here is an example of this API call via AWS CLI:

 

aws iam create-role --role-name RoleWeatherCorp  --assume-role-policy-document file://./roleTrustPolicy

Where the contents of roleTrustPolicy file are below:

{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Sid": "",
     "Effect": "Allow",
     "Principal": {
       "Service": "iot.amazonaws.com"
     },
     "Action": "sts:AssumeRole"
   }
 ]
}

Step 3: Attach policy to role. To do this, Weather Corp would need to call AttachRolePolicy API. Here is an example of this API call via AWS CLI:

aws iam attach-role-policy --role-name RoleWeatherCorp --policy-arn  arn:aws:iam::123456789012:policy/PolicyWeatherCorp

Step 4: Create a rule (WeatherCorpRule) that is attached to the Weather/Corp/Temperature topic. To create this rule, Weather Corp would need to call the CreateTopicRule API. Here is an example of this API call via AWS CLI:

aws iot create-topic-rule --rule-name WeatherCorpRule --topic-rule-payload file://./sqsRule

Where the contents of sqsRule file are below:

{
       "sql": "SELECT * FROM 'Weather/Corp/Temperature'", 
       "ruleDisabled": false, 
       "actions": [{
           "sqs": {
               "queueUrl": "https://sqs.us-east-1.amazonaws.com/987654321012/SqsForWeatherCorp",
               "roleArn": "arn:aws:iam::123456789012:role/RoleWeatherCorp”, 
               "useBase64": false
           }
       }]
}

Note: To run the above command, IAM user/role should have permission to iot:CreateTopicRule with rule arn as resource. Also, it needs to have permission to iam:PassRole action with resource as role arn.

Further, Forecast Corp would need to give permission on SqsForWeatherCorp to Weather Corp’s account, using resource policy. This can be done using SQS’s add-permission API. Here is an example of this API call via AWS CLI:

aws sqs add-permission --queue-url https://sqs.us-east-1.amazonaws.com/987654321012/SqsForWeatherCorp --label SendMessagesToMyQueue --aws-account-ids 123456789012 --actions SendMessage

It is important to note that by adding this resource policy, Forecast Corp not only allows the AWS IoT Rules Engine to send messages to SqsForWeatherCorp, but also permits all users/roles in Weather Corp’s account (which have a policy allowing sqs:SendMessage to SqsForWeatherCorp) to send messages to SqsForWeatherCorp.

Once the above setup is done, all messages sent to Weather/Corp/Temperature (which is in Weather Corp’s account) will be sent to SqsForWeatherCorp (which is in Forecast Corp’s account) using the Rules Engine.
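To verify the end-to-end flow, Weather Corp could publish a test message and Forecast Corp could then poll the queue. The following is a sketch; the payload is illustrative, and with AWS CLI v2 the publish call may also need --cli-binary-format raw-in-base64-out so the payload is sent as raw JSON.

# Run from Weather Corp's account (123456789012)
aws iot-data publish --topic "Weather/Corp/Temperature" --payload '{"temperature": 22.5}'

# Run from Forecast Corp's account (987654321012)
aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/987654321012/SqsForWeatherCorp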

Conclusion

In this blog, the process of creating AWS IoT rules with cross-account destinations has been explained. With the help of simple scenarios, the process of creating rules for Lambda and SQS destinations using the AWS CLI has been detailed in a step-by-step manner.

We hope you found this walkthrough useful. Feel free to leave your feedback in the comments.

Identify APN Partners to Help You Build Innovative IoT Solutions on AWS

AWS provides essential building blocks to help virtually any company build and deploy an Internet of Things (IoT) solution.  Building on AWS, you have access to a broad array of services including AWS IoT, a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices, and low-cost data storage with high durability and availability for backup, archiving, and disaster recovery options to meet virtually an infinite number of scenarios and use cases. For example, Amazon S3 provides scalable storage in the cloud, Amazon Glacier provides low cost archive storage, and AWS Snowball enables large volume data transfers.  No solution is complete without information being generated from the system data collected. Here, you can utilize Amazon Machine Learning for predictive capabilities which can enable you to gain business insights from the data you’ve collected. We strive to offer services commonly used to build solutions today, and regularly release new services purposely built to help you meet your IoT business needs today and in the future.

We are currently witnessing a major shift in how customers view their business. Customers across industries, including Financial Services, Manufacturing, Energy, Transportation, Industrial and Banking, are on a business transformation journey and are seeking guidance to help transform from product-centric to service-orientated companies, taking advantage of actionable insights they can drive through IoT.  Early adopters have already deployed a wide range of cloud-based IoT solutions, and many are seeking to optimize existing solutions. Some companies are just getting started.  Regardless of where your company is in your IoT journey, working with industry-leading AWS Partner Network (APN) Partners who offer value-added services and solutions on AWS can help you accelerate your success.

Today, we launched the AWS IoT Competency to help you easily connect to APN Partners with proven expertise and customer success to help meet your specific business needs.

What’s the value of the AWS IoT Competency for your firm?

The IoT value chain is complex, and has many “actors.” Successful IoT implementations require services and technologies not traditionally part of the Enterprise DNA. As you seek to find best-in-breed partners for your specific needs, whether they be identifying edge or gateway devices or software, a platform to acquire, analyze, and act on IoT data, connectivity for edge and gateway devices, or consulting services to help you architect and deploy your solution, we want to make sure we help you easily connect with Consulting and Technology Partners who can help.

APN Partners who have achieved the AWS IoT Competency have been vetted by AWS solutions architects, and have passed a high bar of requirements such as providing evidence of deep technical and consulting expertise helping enterprises adopt, develop, and deploy complex IoT projects and solutions. IoT Competency Partners provide proven technology and/or implementation capabilities for a variety of use cases including (though not limited to) intelligent factories, smart cities, energy, automotive, transportation, and healthcare.  Lastly, public customer references and proven customer success are a core requirement for any APN Partner to achieve the AWS IoT Competency.

Use Cases and Launch Partners

Congratulations to our launch IoT Technology Competency Partners in the following categories:

Edge:  Partners who provide hardware and software ingredients used to build IoT devices, or finished products used in IoT solutions or applications.  Examples include: sensors, microprocessors and microcontrollers, operating systems, secure communication modules, evaluation and demo kits.

  • Intel
  • Microchip Technology

Gateway: Partners who provide data aggregation hardware and/or software connecting edge devices to the cloud and providing on premise intelligence as well as connecting to enterprise IT systems.  Examples include hardware gateways, software components to translate protocols, and platforms running on-premises to support local decision making.

  • MachineShop

Platform Providers: Independent software vendors (ISVs) who’ve developed a cloud-based platform to acquire, analyze, and act on IoT data. Examples include device management systems, visualization tools, predictive maintenance applications, data analytics, and machine learning software.

  • Bsquare Corporation
  • C3 IoT
  • Splunk
  • PTC
  • Thinglogix

Connectivity: Partners who provide systems to manage wide-area connectivity for edge and gateway devices.  Examples include device and subscription management platforms, billing and rating systems, device provisioning systems, and Mobile Network Operators (MNOs) and Mobile Virtual Network Operators (MVNOs)

  • Amdocs, Inc.
  • Asavie
  • Eseye
  • SORACOM

Congratulations to our launch IoT Consulting Competency Partners!

  • Accenture
  • Aricent
  • Cloud Technology Partners
  • Mobiquity, Inc.
  • Luxoft
  • Solstice
  • Sturdy
  • Trek10

Learn More

Hear from two of our launch AWS IoT Competency Partners, MachineShop and C3 IoT, as they discuss why they work with AWS, and the value of the AWS IoT Competency for customers:

C3 IoT:

MachineShop:

Want to learn more about the different IoT Partner Solutions? Click here.