The Internet of Things on AWS – Official Blog

Samsung Selects AWS IoT for Cloud Print with Help from ClearScale


ClearScale was founded in 2011, with a focus on delivering best-of-breed cloud systems integration, application development, and managed services. We are an AWS Premier Consulting Partner with competencies in Migration, DevOps, Marketing & Commerce, Big Data, and Mobile. Our experienced engineers, architects, and developers are all certified or accredited by AWS.

We have a long track record of successful IoT projects and the proven ability to design and automate IoT platforms, build IoT applications and create infrastructure on which connected devices can easily and securely interact with each other, gather and analyze data and provide valuable insights to your business and customers.

Our firm is unique in that we offer a breadth of technical experience coupled with a holistic organizational view. This allows us to truly partner with our customers and translate complex business requirements into solid, scalable, cloud-optimized solutions. At ClearScale, we understand best practices for driving the maximum business value from cloud deployments.

Samsung partnered with our firm to launch a Cloud Solutions Platform for delivering robust infrastructure and printing solutions at cloud scale for any device from any location. To architect the device management component of the platform, we conducted a competitive analysis between AWS IoT and the incumbent solution, which was based on the Ejabberd messaging platform.

The goal of this effort was to deliver to Samsung a methodology for providing the most reliable printing services to their customer base, so the analysis needed to focus on a key item: the device management component. This component handles the authentication and messaging between devices (in this case, printers) and the cloud infrastructure. It also collects instrumentation data from the devices for later analysis, which allows Samsung to understand the health and utilization of each device, identify issues that require remote troubleshooting, and perform proactive maintenance.

High Level Application Overview: 

Defining the Test Rules

Working with Samsung, we defined a set of criteria for evaluating AWS IoT versus Ejabberd for their device management capability. The attributes were prioritized and weighted based on Samsung’s business requirements. While these key areas are applicable to any IoT evaluation, the subsequent scoring methodology may differ somewhat depending on the client’s specific use case(s) and requirements.

The analysis needed to address two major areas: functional testing and load testing. For the functional testing, we wanted to compare the Ejabberd solution to AWS IoT, evaluating each solution’s core capabilities, security posture, and the ubiquity of its technology. For the load testing, we needed to understand the availability, scalability, maintainability, performance, and reliability of each solution so that the metrics gathered in each area of concern could be applied to a scoring matrix, as shown below.

* A score was awarded for each quality attribute, with a total score being the sum of all scores for the quality attributes. The maximum total score for a solution was deemed to be 100.

Functional Testing

Functional testing was performed first, with the goal of ensuring each system could fulfill the defined functional requirements; only after that was the more expensive load testing performed. We deployed a small environment for Ejabberd and configured the AWS IoT service so that they were functionally identical. Five functional tests were performed to validate the solutions, and both solutions satisfied Samsung’s requirements without any issues.

Load Testing

Defining the Scenarios

Before comparing Ejabberd and AWS IoT, we needed to define the load testing criteria. We opted to run two distinct scenarios:

  1. Simulate peak load conditions
  2. Demonstrate system stability

The message rates were calculated from the following profile:

  • Consumer (2-3 jobs per week)
  • SMB (10-20 jobs per week)
  • Enterprise (150-300 jobs per week)
  • Proposed distribution: 50%, 30%, 20%
  • Total number of agents: 500,000


AvgMsgs = MsgsPerJob * NumOfAgents * JobsPerWeek / SecondsPerWeek

= 2 * 500,000 * 300 / (7 * 24 * 60 * 60)
= 496.032


  • MsgsPerJob = Number of messages resulting from each job (2; see note)
  • AvgJobs = Average number of jobs per second
  • NumOfAgents = Total number of agents (500,000)
  • JobsPerWeek = Number of jobs a week per one agent
  • SecondsPerWeek = Number of seconds in a week (7 * 24 * 60 * 60)

Note: Results are doubled due to SCP behavior. For each job, XoaCommMtgSrv sends a PING message to an Agent. After the Agent executes the job, XoaCommMtgSrv sends another PING message to XCSP Service.


  • Number of jobs executed during busy hours: 90%
  • Number of busy hours per week: 10 (2 hours per day; 5 days per week)

MaxMsgs = MsgsPerJob * BusyHourJobs * NumOfAgents * JobsPerWeek / BusyHours

= 2 * 0.9 * 500,000 * 240 / 36,000
= 6,000


  • MsgsPerJob = Number of messages resulting from each job (2; see note)
  • BusyHourJobs = Percentage of jobs expected to be executed during busy hours (90% = 0.9)
  • NumOfAgents = Total Number of agents (500,000)
  • JobsPerWeek = Number of jobs a week per one agent
  • BusyHours = Number of seconds in busy hours a week (2 * 5 * 3600)
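Both rate formulas can be checked with a short script using the profile numbers above:

```python
# Sketch of the load-profile math from this section, using the
# worst-case per-agent job rates quoted in the document.

SECONDS_PER_WEEK = 7 * 24 * 60 * 60      # 604,800
BUSY_SECONDS_PER_WEEK = 2 * 5 * 3600     # 2 busy hours/day, 5 days/week

def avg_msgs_per_sec(msgs_per_job, num_agents, jobs_per_week):
    """Average messages/sec with jobs spread evenly across the week."""
    return msgs_per_job * num_agents * jobs_per_week / SECONDS_PER_WEEK

def max_msgs_per_sec(msgs_per_job, busy_hour_share, num_agents, jobs_per_week):
    """Peak messages/sec when busy_hour_share of jobs land in busy hours."""
    return (msgs_per_job * busy_hour_share * num_agents * jobs_per_week
            / BUSY_SECONDS_PER_WEEK)

print(round(avg_msgs_per_sec(2, 500_000, 300), 3))  # 496.032
print(max_msgs_per_sec(2, 0.9, 500_000, 240))       # 6000.0
```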

Load Generation

We selected Apache JMeter as our load generation engine. It is an extensible solution with which customized tests are easy to develop. The product is widely used and has strong community support.

“The Apache JMeter™ application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance. Apache JMeter may be used to test performance on static/dynamic resources and dynamic web applications. It can be used to simulate a heavy load on a server, group of servers, network or object to test its strength or to analyze overall performance under different load types.”

Ejabberd and AWS IoT utilize different protocols (XMPP and MQTT, respectively), so we developed custom plugins for Apache JMeter. The plugins allowed us to create custom logging for deeper analysis, maintain connection persistence, and manage secure connections. Our goal was to have the load generation closely emulate the actual system functionality, including connection security and persistence. This included requests/messages from devices (Agents) as well as requests/responses from Samsung’s device management application (XoaCommMtgSrv).

By using an existing tool and extending its functionality, we reduced the overall time needed to develop the load generation code. The following custom JMeter plugins were created to provide capabilities required by the test methodology:

  • MQTT protocol plugin for JMeter – used for AWS IoT testing
  • XMPP protocol plugin for JMeter – used for Ejabberd testing

There are several reasons to use custom plugins:

  • The test model can more closely emulate the actual system
  • Emulate a small number of XoaCommMtgSrv servers and a large number of Agents
  • Support persistent connections – not supported by existing plugins
  • Support secure connections – not supported by existing plugins

Custom logging

  • Distinguish XoaCommMtgSrv server actions from Agent actions
  • Associate log messages with the specific JMeter engine node emulating each XoaCommMtgSrv server or Agent
  • Capture job execution sequences and identify out-of-order job processing
  • Enable low level debugging

The JMeter test plans for each solution have the same high-level behavior:

While testing the JMeter MQTT plugin, we determined that a single JMeter engine node was capable of emulating 8,000 agents without a performance bottleneck. In order to emulate 500,000 agents, as called for by the test methodology, we used 64 JMeter engine nodes for AWS IoT load generation.

While testing the JMeter XMPP plugin, we determined that a single JMeter engine node was capable of simulating 6,500 agents without a performance bottleneck. In order to emulate 500,000 agents, as called for by the test methodology, we used 80 JMeter engine nodes for Ejabberd load generation. This was an important step to ensure that the metrics were not skewed by limitations on the load generation side of the equation.
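The engine node counts follow directly from the per-node limits; a minimal sketch (the deployed counts of 64 and 80 round the bare minimum up further for headroom):

```python
import math

def engine_nodes(total_agents, agents_per_node):
    """Minimum JMeter engine nodes needed to emulate total_agents."""
    return math.ceil(total_agents / agents_per_node)

print(engine_nodes(500_000, 8_000))  # 63 -> 64 deployed for MQTT (headroom)
print(engine_nodes(500_000, 6_500))  # 77 -> 80 deployed for XMPP (headroom)
```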

We deployed the JMeter management node and engine nodes on c4.xlarge EC2 instances. The JMeter cluster was deployed within a single Availability Zone (AZ) for simplicity.

Test Execution

Preparing to load test AWS IoT (MQTT message broker) was a straightforward process. We configured the service and AWS handled all of the resources and scaling behind the scenes. To properly simulate unique devices, we generated 512,000 client certificates and policy rules. These certificates and policies were required for clients to authenticate to the MQTT message broker provided by AWS IoT.

Preparing the Ejabberd environment took a bit more effort; we needed to conduct single-node load tests to identify suitable instance sizes and the maximum capacity of each node. We elected to run the full load tests against two instance types and deployed two Ejabberd clusters (attached to MySQL on EC2): one using c4.2xlarge instances with 9 nodes and one using c4.4xlarge instances with 4 nodes. In order to replicate real-world scenarios, we provisioned an extra node per cluster for HA purposes.

For Stability and Busy Hours testing, the following configurations were used:

  • c4.2xlarge with 9 nodes
  • c4.4xlarge with 4 nodes

Table: Ejabberd Single Node Limits

The common bottleneck for both instance types is “Auth Rate”. To support 1,500 auth/sec requires three c4.4xlarge instances; because of the High Availability requirement, we added one extra instance for a total of four nodes in that cluster. We used the same formula to calculate the nine-node cluster of c4.2xlarge instances.
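The sizing arithmetic can be sketched as follows. The per-node auth rates below are illustrative assumptions (the measured limits lived in the single-node limits table above), chosen only to be consistent with the four-node and nine-node figures:

```python
import math

def cluster_size(required_auth_rate, per_node_auth_rate, ha_spares=1):
    """Working nodes needed to meet the auth rate, plus spare node(s) for HA."""
    return math.ceil(required_auth_rate / per_node_auth_rate) + ha_spares

# Assumed per-node auth rates for illustration only:
print(cluster_size(1_500, 500))  # 3 working + 1 HA = 4 nodes (c4.4xlarge)
print(cluster_size(1_500, 188))  # 8 working + 1 HA = 9 nodes (c4.2xlarge)
```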

We ran two iterations of the Peak test scenario and two iterations of the Stability test scenario in order to compare results. Between runs, we cleared the JMeter engines of previous test data and temporary files and restarted the instances, ensuring the load generation platform was clean and that results from one test run were not skewed by data from a previous run.

Test Results


General Information

Both test cases for AWS IoT passed. The number of errors was less than 0.01%.

Table: AWS IoT Load Test Results

The “Error Distribution” diagrams show the cumulative number of errors over time. The relationship is almost linear.

Stability Load Testing

Table: Stability Testing – Summary

Diagram: Stability Testing – Message Latency Histogram


Histograms for all tests show the distribution of message latency (the time needed to send a message from a publisher to a subscriber). The values will differ from real-world values because the testing environment is located in the same Region as the tested services; in real-life scenarios, agents will be distributed globally, so Internet-related delays will apply.

The purpose of the histograms presented in this document is to show whether there are any delays related to buffering or overload (service degradation).

Diagram: Stability Testing – Error Distribution (Cumulative)

Busy Hour Load Testing

Table: Busy Hour Load Testing – Summary

Diagram: Busy Hour Load Testing – Message Latency Histogram

Diagram: Busy Hour Load Testing – Error Distribution (Cumulative)


During the first test, 1,712 threads lost their connections (16–37 threads on each engine node) between 22:39:17 and 22:41:52 UTC. Threads reconnected to different AWS IoT endpoint IPs.

All threads reconnected successfully, but only after the message receive timeout. During this window, AWS IoT dropped messages because no agents were subscribed to the topics; this cannot be considered an AWS IoT error.

We decided to normalize the first diagram by removing the data for that time period.

General Information
Both the stability and busy hour load test cases for Ejabberd passed. The number of errors was less than 0.01%.

Stability Load Testing
The test case was executed twice for each instance size and passed without errors.

Table: Stability Testing – Summary


  • All tests finished successfully
  • Test #1 for c4.4xlarge was stopped because it exceeded the allotted time; one message was not received due to the test being stopped

Diagram: Stability Testing – Message Latency Histogram (c4.2xlarge)

Diagram: Stability Testing – Message Latency Histogram (c4.4xlarge)

Diagram: Stability Testing – Error Distribution (c4.2xlarge)

Diagram: Stability Testing – Error Distribution (c4.4xlarge)

Busy Hour Load Testing

Table: Busy Hour Load Testing – Summary

Diagram: Busy Hour Load Testing – Message Latency (c4.2xlarge)

Diagram: Busy Hour Load Testing – Message Latency (c4.4xlarge)

Diagram: Busy Hour Load Testing – Error Distribution (c4.2xlarge)

Diagram: Busy Hour Load Testing – Error Distribution (c4.4xlarge)

Comparing Results
At the conclusion of the load testing we found the following:

The analysis showed that both solutions could provide very comparable services for the load profile and use cases.

Cost Analysis

We conducted a cost comparison based on capital expenses (CAPEX) and operational expenses (OPEX). For this particular analysis, we defined CAPEX as the cost of development and deployment of the given solution. OPEX was defined as monthly/yearly infrastructure and maintenance costs. For ease of calculation, we did not include human resource and common organizational expenses in this exercise.

CAPEX costs are based on actual work, performed by ClearScale, for other clients to develop and deploy similar solutions.

Upon further review, it was apparent that the AWS IoT solution was extremely cost-effective from a capital expenditure perspective. The large difference in CAPEX costs also indicated that AWS IoT would take less time to deploy.


The AWS IoT solution scored higher in Availability, Maintainability, and Cost. Ejabberd scored higher on Message Reliability, which carried the lowest weight and priority in our scoring matrix based on the criteria and requirements provided by Samsung.

Table: AWS IoT Results Summary Table

Table: Ejabberd Results Summary Table

Samsung had two main objectives they were attempting to answer with this analysis:

  • “How does this affect our customers?” AWS IoT provides the availability, consistency, and security that deliver the best possible service. This enables Samsung to keep printers online and operational so that their customers can experience uninterrupted printing services.
  • “How does this affect our innovation?” (We can define innovation as the time a developer spends creating new services.) As we saw from the level of effort required to set up our testing environments, the AWS IoT solution is much easier to deploy than the Ejabberd clusters. We did not have any overhead for performance tuning or system scaling. The best part of AWS IoT is that there is zero maintenance effort going forward. The time and money saved can be redirected to creating new products and features for customers.

We were able to demonstrate to Samsung that AWS IoT was the better solution. By reviewing the test results and the comprehensive cost analysis, we provided Samsung with a solution that met the requirements set forth, was scalable and maintainable, and delivered an improved customer experience by leveraging new and innovative technologies.

Learn more about ClearScale IoT

How to route messages from devices to Salesforce IoT Cloud

AWS IoT customers can now route messages from devices directly to Salesforce IoT Cloud with a new AWS IoT Rules Engine Action that requires only configuration.

As part of the strategic relationship between AWS and Salesforce, the combination of AWS IoT and Salesforce IoT Cloud allows you to enrich the messages coming from your devices with customer data residing in Salesforce. This results in deeper insights and allows customers to act on those newly created insights within the Salesforce ecosystem.

In this article, we will walk you through a step-by-step example so you can learn how to configure and test this new action type.

Bring case management to your connected devices

We are going to take an industrial solar farm as an example, inspired by a demonstration that took place at re:Invent 2016.

This demonstration showcases AWS IoT-connected products reporting a critical failure. As a result, a new record in the case management system gets created in Salesforce Service Cloud which instructs a technician to go on-site, assess the situation and make repairs.

To learn more about it, visit the AWS YouTube channel.

Create an AWS IoT Rule with a Salesforce action type

Start by logging into the AWS IoT console.

Click on the Rules section and select Create a rule.


Name your rule solarPanelTelemetry and then enter a meaningful description.

We will create a simple rule to forward all the data coming from a solar panel to Salesforce IoT Cloud. Enter * as the Attribute of the rule to allow all data coming from the device to be passed on. Enter solarPanels/D123456 as the topic filter and leave the condition field blank.

Once you’re done click on Add action.


Select the Salesforce action type and click on Configure action.

Go to the Salesforce IoT Cloud console and copy/paste the value displayed on the Input Stream for the URL and the Token. To learn more about Input Streams please refer to the Salesforce documentation.


Click on Add action and review the AWS IoT Rule. You should see the Salesforce action you just added. Click on Create rule.
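The rule built in the console above corresponds to an AWS IoT SQL statement of the form `SELECT * FROM 'solarPanels/D123456'`. As a rough sketch, the equivalent rule payload might be assembled like this (the Salesforce action shape follows the AWS IoT rule action documentation; the URL and token values are placeholders for the ones copied from your Input Stream):

```python
import json

# Sketch of the rule the console builds: attribute "*", topic filter
# "solarPanels/D123456", empty condition. The Salesforce fields below
# are placeholders -- use the URL and token from your own Input Stream.
topic_rule_payload = {
    "sql": "SELECT * FROM 'solarPanels/D123456'",
    "description": "Forward all solar panel telemetry to Salesforce IoT Cloud",
    "actions": [
        {
            "salesforce": {
                "url": "https://example.salesforce.com/input-stream-url",  # placeholder
                "token": "YOUR-INPUT-STREAM-TOKEN",                        # placeholder
            }
        }
    ],
}

print(json.dumps(topic_rule_payload, indent=2))
```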


Test your configuration

We are going to test the AWS IoT Rule we just created by simulating a message coming from a solar panel. Go to the Test section of the AWS IoT Console.

Enter solarPanels/D123456 as the Subscription topic and push the Subscribe to topic button. This will enable you to verify that the sample message you are sending is published to the topic matching the rule’s configuration.

Next enter solarPanels/D123456 for the topic name in the Publish section and copy/paste the following JSON:

  {
    "deviceId": "D123456",
    "volts": 70,
    "amps": 1.5,
    "watts": 90,
    "latitude": "45.0000",
    "longitude": "-122.0000",
    "timestamp": "1493750762445"
  }

Finally, push the Publish to topic button to send the message.
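If you prefer scripting the test, the sample message can be built and validated in a few lines (the CLI command in the comment is illustrative; check your AWS CLI version for the exact flags):

```python
import json

# The sample message from this walkthrough.
message = {
    "deviceId": "D123456",
    "volts": 70,
    "amps": 1.5,
    "watts": 90,
    "latitude": "45.0000",
    "longitude": "-122.0000",
    "timestamp": "1493750762445",
}

payload = json.dumps(message)

# Outside the console you could publish the same payload with, e.g.:
#   aws iot-data publish --topic solarPanels/D123456 --payload "$PAYLOAD"
# (illustrative invocation; verify against your AWS CLI documentation)

assert json.loads(payload)["deviceId"] == "D123456"
print(payload)
```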

If you want to monitor the rule’s execution, you can set up CloudWatch Logs for AWS IoT.


Log into the Salesforce IoT Cloud console to see the message that was sent from AWS IoT.


Next steps

Refer to the AWS IoT developer documentation for more information on how to use this new action.

Or, sign into the AWS IoT console to try it.

To learn more about AWS IoT, visit the AWS website. To learn more about Salesforce IoT Cloud, visit the Salesforce website.

Understanding the AWS IoT Security Model

According to Gartner, the Internet of Things (IoT) has enormous potential for data generation across the roughly 21 billion endpoints expected to be in use in 2020(1). It’s easy to get excited about this rapid growth and envisage a future where the digital world extends further into our own. But before you decide to deploy devices into the wild, it’s vital to understand how you will maintain your security perimeter.

In this post, I will walk you through the security model used by AWS IoT. I will show you how devices can authenticate to the AWS IoT platform and how they are authorized to carry out actions.

To do this, imagine that you are the forward-thinking owner of a Pizza Restaurant. A few years ago, most of your customers would have picked up the phone and actually spoken to you when ordering a pizza. Then it all moved on-line. You now want to give your customers a new experience, similar to the Amazon Dash Button. One press of an order button and you will deliver a pizza to your customer.

The starting point for your solution will be the AWS IoT Button. This is a programmable button based on Amazon Dash Button hardware. If you choose to use an AWS IoT Button, the easiest way to get up and running is to follow one of the Quickstart guides. Alternatively, you can use the Getting Started with AWS IoT section of the AWS Documentation.

Who’s Calling?

When someone presses an AWS IoT Button to order a pizza, it’s important to know who they are. This is obviously important as you will need to know where to deliver their pizza, but you also only want genuine customers to order. In much the same way as existing, on-line customers identify themselves with a username, each AWS IoT Button needs an identity. In AWS IoT, for devices using MQTT to communicate, this is done with an X.509 certificate.

Before I explain how a device uses an X.509 certificate for identity, it is important to understand public key cryptography, sometimes called asymmetric cryptography (feel free to skip to the next section if you are already familiar with this). Public key cryptography uses a pair of keys to enable messages to be securely transferred. A message can be encrypted using a public key and the only way to decrypt it is to use the corresponding private key:

A key pair is a great way for others to send you secret data: if you keep your private key secure, anyone with access to the public key can send you an encrypted message that only you can decrypt and read.

In addition, public and private keys also allow you to sign documents. Here, a private key is used to add a digital signature to a message. Anyone with the public key can check the signature and know the original message hasn’t been altered:

In addition to demonstrating that a message hasn’t been tampered with, a digital signature can be used to prove ownership of a private key. Anyone with the public key can verify a signature and be confident that, when the message was signed, the signer was in possession of the private key.
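The two uses of a key pair described above can be illustrated with textbook RSA and toy numbers (wholly insecure; for illustration only):

```python
# Textbook RSA with toy numbers -- for illustration only, never for real use.
p, q = 61, 53
n = p * q   # 3233, the public modulus
e = 17      # public exponent; (n, e) is the public key
d = 2753    # private exponent; e * d == 1 (mod (p-1)*(q-1))

message = 65

# Encryption: anyone with the public key can encrypt...
ciphertext = pow(message, e, n)
# ...but only the private-key holder can decrypt.
assert pow(ciphertext, d, n) == message

# Signing: the private-key holder signs...
signature = pow(message, d, n)
# ...and anyone with the public key can verify.
assert pow(signature, e, n) == message
print(ciphertext, signature)
```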

Create an Identity

An X.509 certificate is a document that is used to prove ownership of a public key. To make a new X.509 certificate, you need to create a Certificate Signing Request (CSR) and give it to a Certificate Authority (CA). The CSR is a digital document that contains your public key and other identifying information. When you send a CSR to a CA, it first validates that the identifying information you’ve supplied is correct; for example, you may be asked to prove ownership of a domain by responding to an email. Once your identity has been verified, the CA creates a certificate and signs it with its private key. Anyone can now validate your certificate by checking its digital signature with the CA’s public key.

At this point you may be wondering why you should trust the CA and how you know the public key it gave you is genuine. The CA makes it easy to prove the ownership of its public key by publishing it in an X.509 certificate. The CA’s certificate is itself signed by another CA. This sets up a chain of trust where one CA vouches for another. This chain goes back until a self-signed root certificate is reached.

There are a small number of well-known root certificates. For example, you can find lists of certificates that are installed in macOS Sierra or available to Windows computers as part of the Microsoft Trusted Root Certification Program (free TechNet account needed to view). The chain of trust allows anyone to check the authenticity of any certificate by following it all the way back to a well-known, trusted root certificate:

Since each of your pizza order buttons will need a separate identity, you will need an X.509 certificate for each device. The diagram below shows how a new X.509 certificate is made for a device by AWS IoT. When creating a new certificate, you have three choices. The easiest (option 1 below) is to use the one-click generation. Here, AWS will create a public and private key and follow the process through to create a new certificate signed by the AWS IoT CA. The second option is to provide your own CSR. This has the advantage that you never give AWS sight of your private key. As with option 1, the new certificate generated from the CSR is signed by the AWS IoT CA. The final option is to bring your own certificate signed by your own trusted CA. This choice is best if you already generate your own certificates as part of your device manufacture or you already have a large number of devices in the field. You can find out more about using your own certificates in this blog post.

At the end of this process, you should be in possession of both the new device certificate and its private key. Whether you need to download these from AWS depends on whether you chose option 1 (you need to download the certificate and the private key), option 2 (you just need to download the certificate), or option 3 (you already have both the certificate and the key, so you don’t need to download anything).

At this point, you also need to get a copy of the root certificate used by the AWS IoT server. As you will see below, this is important when establishing an authenticated link with the AWS IoT service.

All three files (the private key, the device certificate and the AWS IoT server certificate) need to be put onto your pizza ordering button. Note that if you are using an AWS IoT Button, you don’t need to put the root certificate onto the device explicitly because it was put onto the device for you when it was manufactured.

Authenticating to AWS IoT

Now that the certificates and private key are on your AWS IoT Button, you are ready to establish a connection to AWS IoT and authenticate. The protocol used is Transport Layer Security (TLS) 1.2, the successor to Secure Sockets Layer (SSL). This is the same protocol that you use to securely shop or bank on the internet, but, in addition to server authentication, the client also uses an X.509 certificate to prove its identity.

The connection starts with the AWS IoT Button contacting the Authentication and Authorization component of AWS IoT with a hello message:

The hello message is the start of a TLS handshake, which will establish a secure communication channel between the AWS IoT Button and AWS IoT. During the handshake, the client and server will agree on a shared secret, rather like a password, which will be used to encrypt all messages. A shared secret is preferred over using asymmetric keys because it is less expensive in terms of the computing power needed to encrypt messages, so you get better communication throughput. The hello message contains details of the various cryptographic methods that the AWS IoT Button is able to use.

When the server receives a hello message it picks the cryptographic method it wants to use to establish the shared secret and returns this, together with its server certificate, to the AWS IoT Button:

Now that the AWS IoT Button has a copy of the server certificate it can check that it is really talking to AWS IoT. It does that by using the AWS IoT Service root certificate, that you downloaded and put on the device. The public key that’s embedded in the root certificate is used to validate the digital signature on the server certificate:

If the digital signature checks out with the root certificate’s public key, then the AWS IoT Button trusts that it has made contact with the AWS IoT service. It now needs to do two things: first, authenticate itself with AWS IoT; second, establish a shared secret for future communication.

To authenticate itself with AWS IoT, the AWS IoT Button first sends a copy of its device certificate to the server:

To complete the authentication process, the AWS IoT Button calculates a hash over all the communication records that are part of the current session with the AWS IoT Server. It then calculates a digital signature for this hash using its private key:

The digital signature is then sent to AWS IoT.

AWS IoT is now in possession of the device’s public key (which was in the device certificate) and the digital signature. While the TLS handshake has been proceeding, the AWS IoT service has also been keeping a record of all communication, and it calculates the same hash as the AWS IoT Button. It uses the device’s public key to check the accuracy of the digital signature:
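The hash-and-sign step above can be sketched with a transcript hash and the same kind of toy RSA key pair (tiny, insecure numbers; real TLS uses large keys and a standardized signature scheme):

```python
import hashlib

# Toy RSA key pair (insecure, illustration only).
n, e, d = 3233, 17, 2753

# Both sides keep the same record of the handshake messages so far.
transcript = b"hello|cipher-choice|server-cert|client-cert"

# Reduce the transcript hash into the toy key's range
# (real signature schemes use padding instead of a modular reduction).
h = int.from_bytes(hashlib.sha256(transcript).digest(), "big") % n

signature = pow(h, d, n)           # client signs with its private key
assert pow(signature, e, n) == h   # server verifies with the public key
```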

If the signature checks out, AWS IoT can be confident that it is talking to a pizza ordering device belonging to one of your customers. By using the unique identifier of the certificate, it knows exactly which device is establishing a MQTT session.

The exact method by which a shared secret is established depends on the key exchange algorithm that the server and client agreed on at the beginning of the handshake. However, the process is started by the AWS IoT Button encrypting a message using the server’s public key (which it got from the server’s certificate). The message might be a pre-master-secret, a public key or nothing. This is sent to the server and can be decrypted using the server’s private key. Both the server and the AWS IoT Button then use the contents of the message to establish a shared secret without needing further communication. From then on, all messages between the device and AWS IoT are secured using the shared secret.

Permission to Order

The pizza order button has used its X.509 certificate to prove its identity and secure the messages it exchanges with AWS IoT. It is now ready to order pizza. Each AWS IoT Button publishes MQTT messages to its own topic, for example:


The second part is the serial number of the AWS IoT Button. It’s important that your system implements least privilege security and only permits an AWS IoT Button to publish to its own topic. For example, a nefarious customer could re-program their button to publish to a neighbor’s topic. When the pizza turns up, it’s simple social engineering to intercept the delivery and claim a free meal.

As you’ve seen, a device certificate is similar to a user’s username; it’s their identity. To give this identity permissions, you need to attach a policy to the certificate, in much the same way as you would attach permissions or policies to an IAM user.

The default policy for an AWS IoT Button is shown below. The default policy grants the owner of the certificate rights to publish to the topic specified in the ‘Resource’ attribute.

In this policy, the serial number is hard-coded. This solution will not scale well as you will need a separate policy for each AWS IoT Button.

Fortunately, the policy language can help us with variable substitutions. For example, the following policy can be applied to all our devices. Instead of hard coding the serial number, the AWS IoT Service obtains it from the certificate that was used to authenticate the device. This assumes that when you created the certificate, the serial number was part of the identifying information.
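As a sketch, the two styles of policy could be generated like this. The ARN Region, account ID, topic prefix, and example serial number are illustrative, and the iot:Certificate.Subject.SerialNumber variable name should be verified against the AWS IoT policy documentation:

```python
import json

# Per-device policy with a hard-coded serial number vs. a shared policy
# using a certificate policy variable (all identifiers are illustrative).
def button_publish_policy(topic_arn):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "iot:Publish",
            "Resource": topic_arn,
        }],
    }

hard_coded = button_publish_policy(
    "arn:aws:iot:us-east-1:123456789012:topic/iotbutton/G030EXAMPLE")  # example serial
templated = button_publish_policy(
    "arn:aws:iot:us-east-1:123456789012:topic/iotbutton/"
    "${iot:Certificate.Subject.SerialNumber}")

print(json.dumps(templated, indent=2))
```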

You can check out the documentation for further information on AWS IoT Policies and the substitution variables that you can use.


In this post, I have introduced you to the AWS IoT security model and shown you how devices are authenticated against the service and how they can be authorized to carry out actions.

You can purchase your own AWS IoT Button here or, if you plan a more sophisticated solution, you may want to check out this page, which has lots of ideas for getting started with AWS IoT, including some starter kits.

If you have any questions or suggestions, please leave a comment below.


(1) Gartner, Press Release, Gartner Reveals Top Predictions for IT Organizations and Users in 2017 and Beyond, October 2016,


How AI Will Accelerate IoT Solutions for Enterprise

Artificial intelligence (AI) is going mainstream, which has long-lasting implications for how enterprises build and improve their products and services.

As announced at re:Invent 2016, AWS is simplifying artificial intelligence (AI) adoption through three low-cost, cloud-based services built for AI-specific use cases. Instead of creating proprietary algorithms, data models, or machine learning techniques, developers at all levels, from Global 2000 enterprises to start-ups, can leverage the Amazon Lex, Amazon Rekognition, and Amazon Polly APIs to innovate quickly and build new Internet of Things (IoT) product and service offerings. Accenture is delivering these innovative offerings by supporting clients with vertical industry applications powered by Amazon AI.

Combining AI with IoT is essential because it enables businesses to collect data in the physical world (from wearables, appliances, automobiles, mobile phones, sensors, and other devices) and add intelligence to deliver a better response or outcome. In other words, AI is the automation brainpower that makes IoT device-driven data more useful.

For example, a telecommunications company could create an AI-powered mobile chat bot to automate customer service processes. One use case would be to monitor incoming IoT data from cable boxes installed in homes. If a device started to malfunction, the mobile chat bot could notify the customer via text or voice interaction of a possible service issue, and offer the convenience of scheduling a service technician. Such a solution could use AWS Lambda for serverless functions and AWS Greengrass for embedded software at the edge, leveraging the AWS Cloud as needed for processing, storage, and compute.

API functionality overview and real-world uses

Used separately or in combination, developers can embed the APIs into existing smart product and service roadmaps, or inject them into cloud-native programming processes.

  • Amazon Lex—Conversational engine (part of the technology behind Alexa) that processes voice or text input to understand intent and personalize an experience or outcome.
  • Amazon Rekognition—Image and facial analysis that detects objects, scenes, and faces to understand what is happening in a real-life scenario or picture.
  • Amazon Polly—Text-to-speech service that synthesizes text into lifelike speech (in male or female voices and in 24 languages) to enrich a response.

Today, businesses typically run analytics in the cloud on transactional datasets, such as customer purchases or location-based information. But, IoT data combined with AI provides a deeper level of insights. By collecting real-time data from IoT devices (or what is known as device-driven data), a business can use an AI engine to automate the information processing and connect different sources of unstructured/structured data to contextualize what a person is asking for. From this understanding, the machine can provide a personalized response or experience directly to the end user, or route the response back into the enterprise to automate another process.

This capability opens an entirely new set of IoT-based product or service offerings. Accenture recently released its Technology Vision 2017, which explains the benefits of AI for the enterprise. For instance, a healthcare business could implement Amazon Lex and Amazon Rekognition to improve the process of monitoring house-bound or elderly patients who need assisted living. In one use case, the service could install a video camera to take pictures of an individual, analyze the images in the cloud to keep track of movements, and send an automatic alert to a caregiver or family member if the patient has not moved in a specified amount of time or has fallen.

Expanding AI and IoT opportunities

In the future, AI combined with IoT will introduce even more scenarios in which robots (aka automated machines) collaborate with people to supply intelligent information and augment human interaction. This will help people to complete tasks more efficiently, interact in a more personalized way or supply on-demand services.

In a retail setting, for example, a business could create a collaborative artificial intelligence (“cobot”) application using Amazon Lex and Amazon Rekognition that analyzes facial features of in-store shoppers in real-time and combines this information with purchase transaction history. The cobot could then prompt sales associates to offer customized help to each customer as they choose items. Or in a hospital situation, Amazon Lex and Amazon Rekognition could be built into an application that uses AI and cloud-based big data, all connected with IoT, to help physicians better diagnose their patients. Examples include detecting skin anomalies with image analysis or stress-related symptoms.

With AWS’s new AI-driven APIs, developing IoT products and services with AI capabilities is becoming cost-effective and accessible for all businesses, and leveraging Accenture to deliver new, applied solutions gives enterprises a quick way to adopt at scale.


Connect your devices to AWS IoT using the Sigfox network

Connectivity is a key element to evaluate when designing IoT systems, as it weighs heavily on the performance, capabilities, battery life of battery-powered objects, and cost of the overall solution. No single network fits all scenarios, which is why AWS partners with many different network providers; you can then choose the most relevant network to satisfy your business requirements. In this blog post, we’ll explore providing LPWAN connectivity to your objects using the Sigfox network. Pierre Coquentin (Sigfox – Software Architect) will explain what Sigfox is and how to connect objects, while Jean-Paul Huon (Z#bre – CTO) will share his experience using Sigfox with AWS in production.

Why Sigfox?

Sigfox provides global, simple, cost-effective, and energy-efficient solutions to power the Internet of Things (IoT). Today, Sigfox’s worldwide network and broad ecosystem of partners are already enabling companies to accelerate digital transformation and to develop new services and value.

In order to connect devices to its global network, Sigfox uses an ultra-narrow-band (UNB) radio technology. The technology is key to providing a scalable, high-capacity network with very low energy consumption, while maintaining a light and easy-to-rollout infrastructure. The company operates in the ISM bands (license-free frequency bands), on the 902MHz band in the U.S., as well as the 868MHz band in Europe.

Once devices are connected to the Sigfox network, data can be transmitted to AWS IoT, enabling customers to create IoT applications that deliver insight into and the ability to act upon their data in real-time.

Please find more information at

Send data from Sigfox to AWS IoT

We’ll start from the assumption that you already have objects connected and sending data to the Sigfox network. All that is left to do is configure the native AWS IoT connector to push your data to the AWS Cloud. To make things a bit more interesting, we will store all the data sent by your devices in an Amazon DynamoDB table.


In order to implement this architecture, we are going to perform the following steps:

  • Configure the AWS IoT connector in the Sigfox console.
  • Provision the necessary resources on AWS, using a CloudFormation template that generates the required IAM roles and permissions, so Sigfox can securely send data into your AWS account through the AWS IoT connector.
  • Manually create a rule in AWS IoT and a DynamoDB table so the data coming from Sigfox is stored in that table.

In our example, we are using the us-east-1 region. We recommend you go through this tutorial once using the exact same configuration. Once you understand how the different pieces are configured, you can customize the implementation to fit your needs.

First, log into the Sigfox console, go to the “Callbacks” section and click on the “New” button to create a new “Callback”.


Now select the “AWS IoT” option as the type of “Callback”.


Copy the “External Id” shown to your clipboard; it will be useful later. The “External Id” is unique to your account and provides greater security when authorizing a third party to access your AWS resources. You can find more information here.

Next click on “Launch Stack” and leave the “CROSS_ACCOUNT” option selected.


This will redirect you to the AWS CloudFormation console, click “Next” on the first screen.


On the following screen, enter the following inputs:

  • Stack name: Choose a meaningful name for the connector.
  • AWSAccountId: Input your AWS account ID; you can find it here.
  • External Id: Copy/paste the external Id given to you in the Sigfox console.
  • Region: Choose the region where AWS IoT will be used.
  • Topic Name: Choose the topic name you wish to send data to.

Click “Next” once you are ready.


The next screen is optional; if you wish, you can customize options (Tags, Permissions, Notifications), otherwise click “Next” to continue with the default options. You should now be on the review screen. Check the “I acknowledge that AWS CloudFormation might create IAM resources” box and click “Create” to launch the CloudFormation stack.

After a few minutes the provisioning should be completed.


After selecting the AWS CloudFormation stack, click on the “Outputs” tab and copy the value for the “ARNRole” key, the “Region” key and the “Topic” key.


Go back to the Sigfox console and paste the values you copied from the “Outputs” section of the AWS CloudFormation stack. Please also fill out the “Json Body” field in the Sigfox console. This JSON represents the payload that will be sent to AWS IoT using the native connector and contains the payload from the connected device as well as some metadata. This is a point for future customization using the Sigfox documentation if you wish to do so.

  "device" : "{device}",
  "data" : "{data}",
  "time" : "{time}",
  "snr" : "{snr}",
  "station" : "{station}",
  "avgSnr" : "{avgSnr}",
  "lat" : "{lat}",
  "lng" : "{lng}",
  "rssi" : "{rssi}",
  "seqNumber" : "{seqNumber}"

Finally, click “Ok”.


You now have successfully created your callback and can visualize the data sent to it.
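To illustrate what the connector delivers, here is a hedged Python sketch (not Sigfox’s actual implementation) of how the {placeholders} in the “Json Body” template are expanded with a message’s values before being published to AWS IoT:

```python
import json

def render_callback_body(template, message):
    """Replace each {name} placeholder in the template with the
    corresponding value from the device message, then parse the JSON."""
    body = template
    for key, value in message.items():
        body = body.replace("{%s}" % key, str(value))
    return json.loads(body)

# Hypothetical device message; field names mirror the template above.
template = '{"device": "{device}", "data": "{data}", "time": "{time}"}'
payload = render_callback_body(
    template, {"device": "12AB34", "data": "1a2b3c", "time": 1500000000}
)
print(payload["device"])  # 12AB34
```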


Now that the data is being sent to AWS IoT via the native connector, we will create an AWS IoT Rule to store the data into an Amazon DynamoDB table.

Start by logging into the Amazon DynamoDB console and then click “Create table”.


Give the table the name “sigfox” and create a Partition Key “deviceid” as well as a Sort Key “timestamp”. Then create the table.


After a couple of minutes, the Amazon DynamoDB table is created. Now, go to the AWS IoT console and create a new rule.


Now we will send every message payload coming from Sigfox in its entirety to the DynamoDB table. To do this we are using “*” as the attribute, “sigfox” as the topic filter, and no conditions.


Next add an action, select “Insert a message into a DynamoDB table”.


Select the Amazon DynamoDB table we created previously. Enter “${device}” for the Hash Key value and “${timestamp()}” for the Range Key value. With this configuration, each device’s ID will be a Hash Key in the table, and the data stored under that Hash Key will be ordered by the timestamp generated by the AWS IoT rules engine, which serves as the Sort Key. Finally, create a new role by clicking the “Create a new role” button. Name it “dynamodbsigfox”, click “Create a new role” again, and then select it in the drop-down list. Thanks to this IAM role, AWS IoT can push data on your behalf to the Amazon DynamoDB table using the “PutItem” permission.
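For illustration, an item written by this rule might look like the following sketch (the values are hypothetical, and the column holding the message body depends on the rule’s payload-field configuration):

```json
{
  "deviceid": "12AB34",
  "timestamp": "1496320496789",
  "payload": "{\"device\":\"12AB34\",\"data\":\"1a2b3c\",\"snr\":\"18.52\"}"
}
```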


Add the action to the rule and create the rule. You should now be able to visualize the newly created rule in the AWS Console.


The final step is to go back to the Amazon DynamoDB console and observe the data sent from Sigfox to AWS IoT through the native connector. Select your table and use the “Items” tab to observe the items; click on a record to see the payload value.



Using this example’s basic flow, you can now create other AWS IoT rules that route the data to other AWS services. You might want to perform archiving, analytics, machine learning, monitoring, alerting and other functions. If you want to learn more about AWS IoT, here are a few links that should help you:

Z#BRE testimony – Use case with Sigfox

Z#BRE has developed an IoT solution for social care based on AWS IoT and Sigfox: “Z#LINK for Social Care”. The goal is to improve the efficiency of social care and create a social link for elderly people. Society is increasingly connected, and people are sharing more real-time information with their communities. For elderly people, this means sharing information with their community about the care they receive each day.

We have developed a smart object that enables the elderly community (relatives, neighbors, care companies, etc.) to inform their community in real-time whenever a care practitioner delivers care. These real-time insights coming from care data enable public institutions to work better with care companies and to optimize costs while improving care quality.

Thanks to Sigfox connectivity, we have created an object that requires no setup or Internet connection and can work for at least two years on 4 batteries. This use of Sigfox is key to the simplicity of the overall solution.

Such a simple setup allows faster deployment of the hardware. With low power consumption and battery operation, elderly people never need to plug in, unplug, or recharge the device, so there is no risk they will forget to do so.

Our infrastructure is based on different AWS services as shown in the following diagram:


Our customer, the public council of the Loiret (a department in France), saves 3 million euros per year thanks to the implementation of this solution. More than 10,000 elderly people were equipped over a period of 5 months and more than 70 home care associations were involved in the project. As a result, this initiative was shown to have brought better care quality to elderly people.

Please find more information at

Next steps

The release of this native connector is the first step in making it easier for customers to connect Sigfox-enabled objects to the AWS Cloud in order to make use of all the Cloud Computing services available on the AWS platform.

We are actively listening to any feedback from customers to continue iterating on this integration in the future and to add more capabilities. Please reach out to to provide feedback.

As the Sigfox network is growing fast globally and the AWS IoT platform is adding new features, we are really looking forward to seeing what new projects customers will deploy!


IoT for Non-IP Devices

Connected devices have found their way into a myriad of commercial and consumer applications. Industries have already moved, or are in the process of moving to, operational models that require them to measure broad data points in real time and optimize their operations based on their analysis of this data. The move to smart connected devices can become expensive if components must be upgraded across the infrastructure. This blog post explores how AWS IoT can be used to gather remote sensor telemetry and control legacy non-IP devices through remote infrared (IR) commands over the Internet.

In agriculture, greenhouses are used to create ideal growing conditions to maximize yield. Smart devices allow metrics like light level, temperature, humidity, and wind to be captured not just for historical purposes, but to react quickly to a changing environment. The example used in this blog post involves gathering light readings and activating an infrared-controlled sun shade in the greenhouse based on the current illuminance levels. A lux sensor will be placed directly under the area that we are going to control. Readings will be captured on a minute-by-minute basis. For more complex implementations, you can configure additional sensors and control devices.

Solution Architecture


AWS solution architecture


The IoT implementation has the following features:

  • Measures and transmits telemetry once a minute.
  • Uses TLS encryption to send telemetry.
  • Monitors telemetry and issues alarms when thresholds are exceeded.
  • Delivers event notifications to a mobile device through SMS messages.
  • Sends IR commands over Ethernet to operate the greenhouse controls.
  • Logs telemetry for reporting purposes.

The implementation includes the following hardware components:

We’re using the MQTT protocol because it is a lightweight yet reliable mechanism for sending telemetry. You can access other AWS services through AWS IoT actions. In this implementation, we used actions to integrate with Amazon CloudWatch and Amazon DynamoDB. CloudWatch logs the telemetry and raises an alarm if a threshold is breached; the alarm publishes to an Amazon SNS topic, which invokes a Lambda function that sends the IR command to the remote device. DynamoDB is used as a long-term, durable store of historic telemetry for reporting purposes.
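As a hedged sketch (the field names are assumptions, not taken from the original implementation), the once-a-minute telemetry message a lux probe publishes over MQTT could be built like this:

```python
import json
import time

def build_lux_reading(sensor_id, lux):
    """Build the JSON telemetry payload the probe publishes to AWS IoT."""
    return json.dumps({
        "sensor": sensor_id,
        "lux": lux,
        "timestamp": int(time.time()),
    })

# Example reading above the 35,000 lux threshold used later in this post.
message = json.loads(build_lux_reading("greenhouse_lux_probe_1", 36500.0))
print(message["lux"] > 35000)  # True
```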

AWS Services Setup

This implementation uses several AWS services to create an end-to-end application to monitor and control greenhouse devices. In addition to the configuration of each service, we also need to create the roles and policies that will allow these services to work together.


We use IAM roles to provide the appropriate amount of access to the AWS services.

Create the CloudWatch role

Step 1. Create the CloudWatch Events role for AWS IoT to use.

Copy and paste the following into a file named aws_iot_cloudwatchMetric_role_policy_document.json. This document contains a trust policy that allows AWS IoT to assume the aws_iot_cloudwatchMetric role we create in the next step.

  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": ""},
    "Action": "sts:AssumeRole"

Step 2. Create an IAM role named aws_iot_cloudwatchMetric.

This is the identity used by the AWS IoT action to send telemetry to CloudWatch.

From the command line, run the following command.

aws iam create-role --role-name aws_iot_cloudwatchMetric --assume-role-policy-document file://aws_iot_cloudwatchMetric_role_policy_document.json

Upon successful execution of this command, an ARN for this role will be returned. Make a note of the ARN for the aws_iot_cloudwatchMetric role. You will need it during the AWS IoT action setup.

Step 3. Create a policy document named aws_iot_cloudwatchMetric.json. It will allow the aws_iot_cloudwatchMetric role to access Amazon CloudWatch.

    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "cloudwatch:PutMetricData",
        "Resource": [

Step 4. Attach the aws_iot_cloudwatchMetric.json policy to the aws_iot_cloudwatchMetric role.

aws iam put-role-policy --role-name aws_iot_cloudwatchMetric --policy-name aws_iot_cloudwatch_access --policy-document file://aws_iot_cloudwatchMetric.json

Create the Lambda role

Now we’ll create a second role that will allow AWS Lambda to execute our function.

Step 1. Copy and paste the following to a file named aws_lambda_role_policy_document.json.

This document contains a policy that will allow AWS Lambda to assume the role we will create in the next step.

   "Version": "2012-10-17",
   "Statement": {
     "Effect": "Allow",
     "Principal": {"Service": ""},
     "Action": "sts:AssumeRole"

Step 2. Create an IAM role named aws_lambda_execution.

This is the identity used by Lambda to execute the function.

aws iam create-role --role-name aws_lambda_execution --assume-role-policy-document file://aws_lambda_role_policy_document.json

Upon successful execution of this command, an ARN for this role will be returned. Make a note of the ARN for the aws_lambda_execution role. You will need it during the Lambda setup.

Step 3. Create the policy document named aws_lambda_execution.json that will allow the aws_lambda_execution role to put events into CloudWatch Logs.

  "Version": "2012-10-17",
  "Statement": [
      "Effect": "Allow",
      "Action": [
      "Resource": "arn:aws:logs:*:*:*"

Step 4. Attach the aws_lambda_execution.json policy to the aws_lambda_execution role.

aws iam put-role-policy --role-name aws_lambda_execution --policy-name aws_iot_lambda_access --policy-document file://aws_lambda_execution.json

Create the DynamoDB role

In order to store the telemetry to a persistent data store, we will create a role for AWS IoT to use.

Step 1. Create the trust policy document. Copy and paste the following to a file named aws_iot_dynamoDB_role_policy_document.json.

This document contains a policy that will allow AWS IoT to assume this role.

   "Version": "2012-10-17",
   "Statement": {
     "Effect": "Allow",
     "Principal": {"Service": ""},
     "Action": "sts:AssumeRole"

Step 2. Create an IAM role named aws_iot_dynamoDB.

This is the identity used by AWS IoT to send telemetry to DynamoDB.

aws iam create-role --role-name aws_iot_dynamoDB --assume-role-policy-document file://aws_iot_dynamoDB_role_policy_document.json

Upon successful execution of this command, an ARN for this role will be returned. Make a note of the ARN for the aws_iot_dynamoDB role. You will need it during the DynamoDB setup.

Step 3. Create a policy document named aws_iot_dynamoDB.json that will allow the aws_iot_dynamoDB role to write items to the table.

   "Version": "2012-10-17",
   "Statement": {
     "Effect": "Allow",
     "Action": "dynamodb:PutItem",
     "Resource": "arn:aws:dynamodb:us-east-1:000000000000:table/IoTSensor"

Step 4. Attach the aws_iot_dynamoDB.json policy to the aws_iot_dynamoDB role.

aws iam put-role-policy --role-name aws_iot_dynamoDB --policy-name aws_iot_dynamoDB_access --policy-document file://aws_iot_dynamoDB.json

Now that the IAM roles and policies are in place, we can configure AWS IoT and the associated rules.

Set up AWS IoT

Let’s set up AWS IoT as the entry point for device communications. As soon as AWS IoT is communicating with the greenhouse sensors, we will use the AWS IoT rules engine to take further action on the sensor telemetry. The rules engine makes it easy to create highly scalable solutions that integrate with other AWS services, such as DynamoDB, CloudWatch, SNS, Lambda, Amazon Kinesis, Amazon Elasticsearch Service, Amazon S3, and Amazon SQS.

Create a thing

From the AWS CLI, follow these steps to create a thing.

Step 1. Create a thing that represents the lux meter.

aws iot create-thing --thing-name "greenhouse_lux_probe_1"

Step 2. Create the policy.

Start by creating a JSON policy document that will be referenced by the create-policy statement. Copy and paste the following into a file named iot_greenhouse_lux_probe_policy.json. Be sure to replace 000000000000 with your AWS account number.

   "Version": "2012-10-17",
   "Statement": [
       "Effect": "Allow",
       "Action": [
       "Resource": [
        "Effect": "Allow",
        "Action": [
        "Resource": [

Now, run the following command to create the policy. Be sure to include the full path to the policy document.

aws iot create-policy --policy-name "greenhouse_lux_policy" --
policy-document file://iot_greenhouse_lux_probe_policy.json

Step 3. Create a certificate.

Creating a certificate pair is a simple process when you use the AWS IoT CLI. Use the following command to create the certificate, mark it as active, and then save the keys to the local file system. These keys will be required for authentication between the thing and AWS IoT.

aws iot create-keys-and-certificate --set-as-active --certificate-pem-outfile IoTCert.pem.crt --public-key-outfile publicKey.pem.key --private-key-outfile privateKey.pem.key

Step 4. Attach the thing and policy to the certificate.
Using the following as an example, replace 000000000000 with your AWS account number and 22222222222222222222222222222222222222222222 with your certificate ID. This will attach the thing to the certificate.

aws iot attach-thing-principal --thing-name greenhouse_lux_probe_1 --principal arn:aws:iot:us-east-1:000000000000:cert/22222222222222222222222222222222222222222222

Now, attach the policy to the certificate.

aws iot attach-principal-policy --policy-name greenhouse_lux_policy --principal arn:aws:iot:us-east-1:000000000000:cert/22222222222222222222222222222222222222222222

Now that you have created a thing, policy, and certificate, you might also want to test connectivity to AWS IoT using a program like aws-iot-elf, which is available from the AWS Labs GitHub repo. After you have confirmed connectivity, you can build out the remainder of the application pipeline.

Configure the AWS IoT rules engine

Creating rules is an extremely powerful and straightforward way to build a responsive, extensible architecture. In this example, we will record and respond to telemetry as quickly as it is reported. Let’s imagine we need to ensure that the crop is not exposed to light intensity greater than 35,000 lux. First, we will integrate AWS IoT with CloudWatch, which will decide what to do based on the received telemetry. Two rules are required to support this case: one rule called TooBright and a second rule called NotTooBright.

Step 1. Create a JSON file named create-TooBright-rule.json with the following content to serve as the rule policy. Be sure to use your AWS account number and the ARN for the aws_iot_cloudwatchMetric role that you created earlier.
   "sql": "SELECT * FROM '/topic/Greenhouse/LuxSensors' WHERE 
lux > 35000",
   "description": "Sends telemetry above 35,000 lux to 
CloudWatch to generate an alert",
   "actions": [
       "cloudwatchMetric": {
           "metricUnit" : "Count",
           "metricValue" : "1",
           "metricNamespace" : "Greenhouse Lux Sensors",
           "metricName" : "ShadePosition"
    "awsIotSqlVersion": "2016-03-23",
    "ruleDisabled": false

Step 2. From the command line, run this command to create the rule.

aws iot create-topic-rule --rule-name TooBright --topic-rule-payload file://create-TooBright-rule.json

Step 3. Create a JSON file named create-NotTooBright-rule.json with the following content to serve as the rule policy. Again, be sure to use the AWS account number and ARN for the aws_iot_cloudwatchMetric role that you created earlier. Compared to the first rule, the WHERE clause is now < 35000 and the metricValue is 0.

   "sql": "SELECT * FROM '/topic/Greenhouse/LuxSensors' WHERE 
lux < 35000",
   "description": "Sends telemetry above 35,000 lux to 
CloudWatch to generate an alert",
   "actions": [
       "cloudwatchMetric": {
           "metricUnit" : "Count",
           "metricValue" : "0",
           "metricNamespace" : "Greenhouse Lux Sensors",
           "metricName" : "ShadePosition"
   "awsIotSqlVersion": "2016-03-23",
   "ruleDisabled": false

Step 4. From the command line, run this command to create the rule.

aws iot create-topic-rule --rule-name NotTooBright --topic-rule-payload file://create-NotTooBright-rule.json

Set up SNS

We will configure SNS to invoke the Lambda function and deliver an SMS message to a mobile phone. The SMS notification functionality is useful for letting the greenhouse operations team know the system is actively monitoring and controlling the greenhouse devices. Setting up SNS for this purpose is a simple process.

Step 1. Create the SNS topic.

aws sns create-topic --name Sunshades

The SNS service returns the ARN of the topic.

    "TopicArn": "arn:aws:sns:us-east-1:000000000000:Sunshades"

Step 2. Using the topic ARN and a phone number where the SMS message should be sent, create a subscription.

aws sns subscribe --topic-arn arn:aws:sns:us-east-1:000000000000:Sunshades --protocol sms --notification-endpoint "1 555 555 5555"

The SNS service confirms the subscription ID.

    "SubscriptionArn": "arn:aws:sns:us-east-

Set up Lambda

We are going to use a Lambda function written in Python to make a socket connection to the remote Ethernet-to-IR device that controls the sun shade.

Step 1. Sign in to the AWS Management Console, and then open the AWS Lambda console. Choose the Create a Lambda function button.

Step 2. On the blueprint page, choose Configure triggers.

Step 3. On the Configure triggers page, choose SNS. From the SNS topic drop-down list, choose the Sunshades topic.

Step 4. Select the Enable trigger check box to allow the SNS topic to invoke the Lambda function, and then choose Next.

AWS Lambda Console screenshot


Step 6. On the Configure function page, type a name for your function (for example, Sunshade_Open).

Step 7. From the Runtime drop-down box, choose Python 2.7.

Step 8. Copy and paste the following Python code to create the Lambda functions that will open the sun shades. Be sure to use the IP address and port of the remote Ethernet-to-IR communication device. Include the IR code for your device as provided by the manufacturer.

You can get the IR code through the learning function of the IR repeater. This process typically requires sending an IR signal to the IR repeater so that it can capture and save the code as binary. The binary values for the IR command are then sent as part of the IP packet destined for the IR repeater.

Lambda function to open the sun shade

#Lambda function to extend the sunshade
#when the lux reading is too high
import socket

# Replace with the address of the Ethernet-to-IR device and the
# binary IR code captured from the IR repeater's learning function.
HOST = ''                # The remote host
PORT = 4998              # The same port as used by the server
IR_CODE = b''            # IR command provided by the manufacturer

def lambda_handler(event, context):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    s.sendall(IR_CODE)   # Send the IR command to the device
    data = s.recv(1024)
    s.close()
    print 'Received', repr(data)

In Role, choose an existing role. In Existing role, choose the aws_lambda_execution role you created earlier, and then choose Next.

AWS Lambda Python code


On the following page, review the configuration, and then choose Create function.

Choose the blue Test button and leave the default Hello World template as it is. Choose Save and Test to see if the function ran successfully. The Lambda function should have issued the remote IR command, so check to see if the sun shade device responded to the Lambda invocation. If the execution result is marked failed, review the logs on the test page to determine the root cause. If the Lambda function was successful but the sun shade did not move, double-check that you used the appropriate IR codes.

Now create the second Lambda function, ‘Sunshade_Close’. It will be similar to ‘Sunshade_Open’, except it will contain the IR code for closing the shade.

Set up CloudWatch

We send a metric value of either 0 or 1 from the AWS IoT action to CloudWatch to indicate whether the sun shade should be opened or closed. In this example, 1 indicates that the lux level is above 35,000 and the sun shade should be opened (extended); 0 indicates a lower lux level, for which the shade should be closed (retracted). We’ll have a problem if the devices are cycled too frequently: not only is this an inefficient way to control devices, it can also damage the equipment. For this reason, we will use CloudWatch alarms to require 15 minutes of consistent readings before the shade changes state. Each alarm triggers on the value placed in the metric you named when you created the AWS IoT action.

The first alarm is called Trigger_SunShade_Open. This alarm triggers when the ShadePosition value is greater than or equal to 1 for 15 consecutive minutes. We treat ShadePosition as a binary value: 1 indicates the lux is above the threshold and the sun shade should be opened, while 0 indicates that the sun shade should be closed. We define the period as a one-minute interval, which means the sun shade will change state no sooner than every 15 minutes. A second alarm called Trigger_SunShade_Close is created in the same way, except that it triggers when ShadePosition is less than 1 for 15 minutes. Both alarms are configured with an action to send a notification to the appropriate SNS topic.
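The debounce behavior the two alarms implement can be sketched in Python (a simplified model of CloudWatch's evaluation, not part of the deployed solution):

```python
# Hedged sketch: only report a state change after the ShadePosition
# metric has held the same side of the threshold for 15 straight periods.
def evaluate(samples, threshold=1, periods=15):
    """Return 'OPEN' if the last `periods` samples are all >= threshold,
    'CLOSE' if they are all below it, and None otherwise."""
    if len(samples) < periods:
        return None
    window = samples[-periods:]
    if all(s >= threshold for s in window):
        return "OPEN"
    if all(s < threshold for s in window):
        return "CLOSE"
    return None

print(evaluate([1] * 15))            # OPEN
print(evaluate([0] * 15))            # CLOSE
print(evaluate([0] * 10 + [1] * 5))  # None - no state change yet
```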

aws cloudwatch put-metric-alarm --alarm-name "Trigger_SunShade_Open" 
--namespace "Greenhouse Lux Sensors" --metric-name "ShadePosition" 
--statistic Sum --evaluation-periods "15" 
--comparison-operator "GreaterThanOrEqualToThreshold" 
--alarm-actions arn:aws:sns:us-east-1:000000000000:Sunshades 
--period "60" --threshold "1.0" --actions-enabled

Next, create the Trigger_SunShade_Close alarm in a similar manner to Trigger_SunShade_Open. This alarm will trigger when the ShadePosition value stays at 0 for 15 consecutive minutes.

aws cloudwatch put-metric-alarm --alarm-name "Trigger_SunShade_Close" \
--namespace "Greenhouse Lux Sensors" --metric-name "ShadePosition" \
--statistic Sum --evaluation-periods "15" \
--comparison-operator "LessThanOrEqualToThreshold" \
--alarm-actions arn:aws:sns:us-east-1:000000000000:Sunshades \
--period "60" --threshold "0" --actions-enabled

Sign in to the AWS Management Console, open the CloudWatch console, and then look at the alarms.

Confirm the two alarms were created. Because of the 15-minute evaluation period, you need to wait 15 minutes to verify the alarms are working.

AWS CloudWatch Alarms showing insufficient data

Depending on the reported value of the ShadePosition variable, the state displayed for one alarm should be OK and the other should be ALARM.

After 15 minutes, we see the Trigger_SunShade_Close alarm is in the OK state, which means the alarm has not been raised and therefore the sun shade should not be closed.

AWS CloudWatch Alarm screenshot



The Trigger_SunShade_Open alarm is in an ALARM state, which indicates the sun shade should be open.

AWS CloudWatch Alarm State

This alarm state should also have generated an SMS message to the mobile device that was configured in the SNS topic.

Set up DynamoDB

DynamoDB is the repository for the historical lux readings because of its ease of management, low operating costs, and reliability. We’ll use an AWS IoT action to stream telemetry directly to DynamoDB. To get started, create a new DynamoDB table.

aws dynamodb create-table --table-name Greenhouse_Lux_Sensor \
--attribute-definitions AttributeName=item,AttributeType=S \
AttributeName=timestamp,AttributeType=S \
--key-schema AttributeName=item,KeyType=HASH \
AttributeName=timestamp,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

DynamoDB will return a description of the table to confirm it was created.
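The table's composite key pairs a hash key (item, the thing name) with a range key (timestamp), so readings are grouped per device and sorted by time. A toy in-memory analogue, purely for illustration; LuxSensor1 and the timestamps are made-up values:

```python
# Toy analogue of the Greenhouse_Lux_Sensor key schema: readings are
# grouped by hash key (the thing name) and sorted by range key (timestamp).
from collections import defaultdict

table = defaultdict(dict)  # {item: {timestamp: attributes}}

def put_item(item, timestamp, lux):
    table[item][timestamp] = {"lux": lux}

def query(item):
    """Return one device's readings in timestamp order, like a Query
    on the hash key with an ascending range-key sort."""
    return sorted(table[item].items())

put_item("LuxSensor1", "2016-07-01T10:01:00Z", 36000)
put_item("LuxSensor1", "2016-07-01T10:00:00Z", 34000)
print(query("LuxSensor1")[0][0])  # earliest timestamp comes first
```

This is why ${timestamp()} makes a good range key below: each new telemetry row lands in time order under its device's partition.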

AWS IoT DynamoDB Action

Step 1. Create a JSON file named create-dynamoDB-rule.json with the following content to serve as the rule policy. Use your AWS account number and the ARN for the IAM role you created earlier.

{
   "sql": "SELECT * FROM '/topic/Greenhouse/LuxSensors/#'",
   "ruleDisabled": false,
   "awsIotSqlVersion": "2016-03-23",
   "actions": [{
       "dynamoDB": {
           "tableName": "Greenhouse_Lux_Sensor",
           "roleArn": "arn:aws:iam::000000000000:role/<your-IoT-role>",
           "hashKeyField": "item",
           "hashKeyValue": "${Thing}",
           "rangeKeyField": "timestamp",
           "rangeKeyValue": "${timestamp()}"
       }
   }]
}
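The rule's FROM clause ends in the MQTT multi-level wildcard #, so it captures messages from every topic under /topic/Greenhouse/LuxSensors/. A minimal matcher sketching the wildcard semantics; this is our own illustration, not the AWS IoT implementation:

```python
# Minimal MQTT topic-filter matcher illustrating '+' (one level) and
# '#' (all remaining levels), the semantics the rule's FROM clause uses.
def topic_matches(topic_filter, topic):
    f_parts, t_parts = topic_filter.split("/"), topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True          # '#' swallows the rest of the topic
        if i >= len(t_parts):
            return False
        if part != "+" and part != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("/topic/Greenhouse/LuxSensors/#",
                    "/topic/Greenhouse/LuxSensors/LuxSensor1"))  # True
print(topic_matches("/topic/Greenhouse/LuxSensors/#",
                    "/topic/Greenhouse/Temperature"))            # False
```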

Step 2. From the command line, run this command to create the rule.

aws iot create-topic-rule --rule-name Lux_telemetry_to_DynamoDB \
--topic-rule-payload file://create-dynamoDB-rule.json

Execute this command to verify that telemetry is successfully being sent to DynamoDB.

aws dynamodb scan --table-name Greenhouse_Lux_Sensor \
--return-consumed-capacity TOTAL

This command will scan the DynamoDB table and return any data that was written to it. In addition, it will return a ScannedCount with the number of objects in the table. If the ScannedCount is 0, make sure that telemetry is being sent to and received by AWS IoT.


You now have a fully functional AWS IoT implementation that provides intelligent control of not-so-smart devices. You have also created a completely serverless solution that can serve a single device or billions of them, all without changing the underlying architecture. Lastly, charges for the services used in this implementation are based on consumption, which yields a very low TCO.

There are infinite uses for AWS IoT when you combine its cloud logic with the devices and sensors on the market. This post has shown how the power of this AWS service can be extended to non-IP devices, which can now be managed and controlled as if they were designed for IoT applications.

Access Cross Account Resources Using the AWS IoT Rules Engine

The AWS IoT platform enables you to connect your internet-enabled devices to the AWS cloud via the MQTT, HTTP, or WebSockets protocols. Once connected, the devices can send data to MQTT topics. Data ingested on MQTT topics can be routed into AWS services (like Amazon S3, Amazon SQS, Amazon DynamoDB, and AWS Lambda) by configuring rules in the AWS IoT Rules Engine.

This blog post explains how to set up rules for cross-account data ingestion, from an MQTT topic in one account, to a destination in another account. We will focus on the cross-account access from an MQTT topic (the source) to Lambda and SQS (the destinations).

This post assumes that you are familiar with AWS IoT and the Rules Engine, and have a fair understanding of AWS IAM concepts like users, roles, and resource-based permissions.

We are going to use the AWS CLI to set up the cross-account rules. If you don't have the AWS CLI installed, follow the installation steps in the AWS CLI documentation. If you already have the AWS CLI installed, make sure you are using the most recent version.

Why do you need cross-account access via rules engine?

Rules with cross-account access allow you to ingest data published on an MQTT topic in one account to a destination (S3, SQS, etc.) in another account. For example, Weather Corp collects weather data using its network of sensors and then publishes that data on MQTT topics in its AWS account. If Weather Corp wishes to publish this data to an Amazon SQS queue in its partner Forecast Corp's AWS account, it can do so by enabling cross-account access via the AWS IoT Rules Engine.

How can you configure a cross-account rule?

Cross-account rules can be configured using the resource-based permissions on the destination resource.

Thus, for Weather Corp to create a rule in their account to ingest weather data into an Amazon SQS queue in Forecast Corp's AWS account, cross-account access can be set up in two steps:


  1. Forecast Corp creates a resource policy on their Amazon SQS queue, allowing Weather Corp's AWS account to perform the sqs:SendMessage action.
  2. Weather Corp configures a rule with the Forecast Corp queue URL as its destination.

Note: Cross-account access via the AWS IoT Rules Engine needs resource-based permissions. Hence, only destinations that support resource-based permissions can be enabled for cross-account access via the AWS IoT Rules Engine. Following is the list of such destinations:

Amazon Simple Queue Service (SQS)
Amazon Simple Notification Service (SNS)
Amazon Simple Storage Service (S3)
AWS Lambda

Configure a cross-account rule

This section explains how to configure a cross-account rule to access an AWS Lambda function and an Amazon SQS queue in a different account. We used the AWS CLI for this configuration.

The steps to configure a cross-account rule for AWS Lambda are different from those for other AWS services that support resource policies.

For Lambda:

The AWS IoT Rules Engine requires a resource-based policy to access Lambda functions, so a cross-account Lambda function invocation is configured just like any other IoT-Lambda rule. The process of enabling cross-account access for Lambda can be understood from the following example:

Assume that Weather Corp, using AWS account# 123456789012, wishes to trigger a Lambda function (LambdaForWeatherCorp) in Forecast Corp’s account (AWS account# 987654321012) via the Rules Engine. Further, Weather Corp wishes to trigger this rule when a message arrives on Weather/Corp/Temperature MQTT topic.

To do this, Weather Corp would need to create a rule (WeatherCorpRule) which will be attached to Weather/Corp/Temperature topic. To create this rule, Weather Corp would need to call the CreateTopicRule API. Here is an example of this API call via AWS CLI:

aws iot create-topic-rule --rule-name WeatherCorpRule --topic-rule-payload file://./lambdaRule

Contents of the lambdaRule file:

{
    "sql": "SELECT * FROM 'Weather/Corp/Temperature'",
    "ruleDisabled": false,
    "actions": [{
        "lambda": {
            "functionArn": "arn:aws:lambda:us-east-1:987654321012:function:LambdaForWeatherCorp"
        }
    }]
}

Forecast Corp will also have to give the AWS IoT Rules Engine permission to trigger LambdaForWeatherCorp Lambda function. Also, it is very important for Forecast Corp to make sure that only the AWS IoT Rules Engine is able to trigger the Lambda function and that it is done so only on behalf of Weather Corp’s WeatherCorpRule (created above) rule.

To do this, Forecast Corp would need to use Lambda’s AddPermission API. Here is an example of this API call via AWS CLI:

aws lambda add-permission --function-name LambdaForWeatherCorp --region us-east-1 --principal iot.amazonaws.com --source-arn arn:aws:iot:us-east-1:123456789012:rule/WeatherCorpRule --source-account 123456789012 --statement-id "unique_id" --action "lambda:InvokeFunction"

--principal: This field gives permission to AWS IoT (represented by iot.amazonaws.com) to call the Lambda function.

--source-arn: This field makes sure that only the arn:aws:iot:us-east-1:123456789012:rule/WeatherCorpRule rule in AWS IoT triggers this Lambda function (no other rule in the same or a different account can trigger it).

--source-account: This field makes sure that AWS IoT triggers this Lambda function only on behalf of account 123456789012.

Note: To run the above command, the IAM user/role must have permission to perform the lambda:AddPermission action.
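Conceptually, the add-permission call appends a statement to the function's resource policy. The Python sketch below assembles an approximation of that statement from the same flag values; the exact JSON AWS stores may differ in detail, so treat this as illustration only.

```python
# Sketch: the resource-policy statement that the add-permission call above
# effectively adds to LambdaForWeatherCorp. The SourceArn and SourceAccount
# conditions are what pin invocation to one rule in one account.
import json

def lambda_permission_statement(function_arn, statement_id,
                                source_arn, source_account):
    return {
        "Sid": statement_id,
        "Effect": "Allow",
        "Principal": {"Service": "iot.amazonaws.com"},
        "Action": "lambda:InvokeFunction",
        "Resource": function_arn,
        "Condition": {
            "ArnLike": {"AWS:SourceArn": source_arn},
            "StringEquals": {"AWS:SourceAccount": source_account},
        },
    }

stmt = lambda_permission_statement(
    "arn:aws:lambda:us-east-1:987654321012:function:LambdaForWeatherCorp",
    "unique_id",
    "arn:aws:iot:us-east-1:123456789012:rule/WeatherCorpRule",
    "123456789012",
)
print(json.dumps(stmt, indent=2))
```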

For Other Services

As of today, the Rules Engine does not use resource policies to access non-Lambda AWS resources (Amazon SQS, Amazon S3, Amazon SNS). Instead, it uses an IAM role to access these resources in an account. Additionally, AWS IoT rules can only be configured with roles from the same account: a rule cannot be created in one account that uses a role from another account.

While a role from another account cannot be used in a rule, a role can be set up in an account to access resources in another account. For a cross-account role to work, you also need a resource policy on the resource that is to be accessed across accounts.

The process of rule creation with access to cross-account resources can be understood from the below example:

Let's assume that Weather Corp, using AWS account# 123456789012, wishes to send some data to an Amazon SQS queue (SqsForWeatherCorp) in Forecast Corp's account (AWS account# 987654321012) via the rules engine, and wishes to trigger this rule when a message arrives on the Weather/Corp/Temperature MQTT topic.

To do this, Weather Corp would need to do the following things:

Step 1: Create an IAM policy (PolicyWeatherCorp) that defines cross-account access to SqsForWeatherCorp SQS queue. To do this, Weather Corp would need to call IAM’s CreatePolicy API. Here is an example of this API call via AWS CLI:

aws iam create-policy --policy-name PolicyWeatherCorp --policy-document file://./crossAccountSQSPolicy

Where the contents of crossAccountSQSPolicy file are below:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "unique",
        "Effect": "Allow",
        "Action": [
            "sqs:SendMessage"
        ],
        "Resource": [
            "arn:aws:sqs:us-east-1:987654321012:SqsForWeatherCorp"
        ]
    }]
}

Step 2: Create a role (RoleWeatherCorp) that defines iot.amazonaws.com as a trusted entity. To do this, Weather Corp would need to call IAM's CreateRole API. Here is an example of this API call via AWS CLI:


aws iam create-role --role-name RoleWeatherCorp  --assume-role-policy-document file://./roleTrustPolicy

Where the contents of roleTrustPolicy file are below:

{
  "Version": "2012-10-17",
  "Statement": [{
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "iot.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
  }]
}

Step 3: Attach policy to role. To do this, Weather Corp would need to call AttachRolePolicy API. Here is an example of this API call via AWS CLI:

aws iam attach-role-policy --role-name RoleWeatherCorp --policy-arn  arn:aws:iam::123456789012:policy/PolicyWeatherCorp

Step 4: Create a rule (WeatherCorpRule) that is attached to the Weather/Corp/Temperature topic. To create this rule, Weather Corp would need to call the CreateTopicRule API. Here is an example of this API call via AWS CLI:

aws iot create-topic-rule --rule-name WeatherCorpRule --topic-rule-payload file://./sqsRule

Where the contents of sqsRule file are below:

{
    "sql": "SELECT * FROM 'Weather/Corp/Temperature'",
    "ruleDisabled": false,
    "actions": [{
        "sqs": {
            "queueUrl": "https://sqs.us-east-1.amazonaws.com/987654321012/SqsForWeatherCorp",
            "roleArn": "arn:aws:iam::123456789012:role/RoleWeatherCorp",
            "useBase64": false
        }
    }]
}

Note: To run the above command, the IAM user/role must have permission to perform the iot:CreateTopicRule action with the rule ARN as the resource. It also needs permission for the iam:PassRole action with the role ARN as the resource.

Further, Forecast Corp would need to grant permission on SqsForWeatherCorp to Weather Corp's account, using a resource policy. This can be done using SQS's AddPermission API. Here is an example of this API call via AWS CLI:

aws sqs add-permission --queue-url https://sqs.us-east-1.amazonaws.com/987654321012/SqsForWeatherCorp --label SendMessagesToMyQueue --aws-account-ids 123456789012 --actions SendMessage

It is important to note that by adding this resource policy, Forecast Corp not only allows the AWS IoT rules engine to send messages to SqsForWeatherCorp, but also permits all users/roles in Weather Corp's account (which have a policy allowing sqs:SendMessage to SqsForWeatherCorp) to send messages to SqsForWeatherCorp.

Once the above setup is done, all messages sent to Weather/Corp/Temperature (which is in WeatherCorp’s account) will be sent to SqsForWeatherCorp (which is in Forecast Corp’s account) using the rules engine.


In this blog, we explained the process of creating AWS IoT rules with cross-account destinations. With the help of simple scenarios, we detailed, step by step, the process of creating rules for Lambda and SQS destinations using the AWS CLI.

We hope you found this walkthrough useful. Feel free to leave your feedback in the comments.

Identify APN Partners to Help You Build Innovative IoT Solutions on AWS

AWS provides essential building blocks to help virtually any company build and deploy an Internet of Things (IoT) solution.  Building on AWS, you have access to a broad array of services including AWS IoT, a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices, and low-cost data storage with high durability and availability for backup, archiving, and disaster recovery options to meet virtually an infinite number of scenarios and use cases. For example, Amazon S3 provides scalable storage in the cloud, Amazon Glacier provides low cost archive storage, and AWS Snowball enables large volume data transfers.  No solution is complete without information being generated from the system data collected. Here, you can utilize Amazon Machine Learning for predictive capabilities which can enable you to gain business insights from the data you’ve collected. We strive to offer services commonly used to build solutions today, and regularly release new services purposely built to help you meet your IoT business needs today and in the future.

We are currently witnessing a major shift in how customers view their business. Customers across industries, including Financial Services, Manufacturing, Energy, Transportation, Industrial and Banking, are on a business transformation journey and are seeking guidance to help transform from product-centric to service-oriented companies, taking advantage of actionable insights they can drive through IoT.  Early adopters have already deployed a wide range of cloud-based IoT solutions, and many are seeking to optimize existing solutions. Some companies are just getting started.  Regardless of where your company is in your IoT journey, working with industry-leading AWS Partner Network (APN) Partners who offer value-added services and solutions on AWS can help you accelerate your success.

Today, we launched the AWS IoT Competency to help you easily connect to APN Partners with proven expertise and customer success to help meet your specific business needs.

What’s the value of the AWS IoT Competency for your firm?

The IoT value chain is complex, and has many “actors.” Successful IoT implementations require services and technologies not traditionally part of the Enterprise DNA. As you seek to find best-in-breed partners for your specific needs, whether they be identifying edge or gateway devices or software, a platform to acquire, analyze, and act on IoT data, connectivity for edge and gateway devices, or consulting services to help you architect and deploy your solution, we want to make sure we help you easily connect with Consulting and Technology Partners who can help.

APN Partners who have achieved the AWS IoT Competency have been vetted by AWS solutions architects, and have passed a high bar of requirements such as providing evidence of deep technical and consulting expertise helping enterprises adopt, develop, and deploy complex IoT projects and solutions. IoT Competency Partners provide proven technology and/or implementation capabilities for a variety of use cases including (though not limited to) intelligent factories, smart cities, energy, automotive, transportation, and healthcare.  Lastly, public customer references and proven customer success are a core requirement for any APN Partner to achieve the AWS IoT Competency.

Use Cases and Launch Partners

Congratulations to our launch IoT Technology Competency Partners in the following categories:

Edge:  Partners who provide hardware and software ingredients used to build IoT devices, or finished products used in IoT solutions or applications.  Examples include: sensors, microprocessors and microcontrollers, operating systems, secure communication modules, evaluation and demo kits.

  • Intel
  • Microchip Technology

Gateway: Partners who provide data aggregation hardware and/or software connecting edge devices to the cloud and providing on premise intelligence as well as connecting to enterprise IT systems.  Examples include hardware gateways, software components to translate protocols, and platforms running on-premises to support local decision making.

  • MachineShop

Platform Providers: Independent software vendors (ISVs) who’ve developed a cloud-based platform to acquire, analyze, and act on IoT data. Examples include device management systems, visualization tools, predictive maintenance applications, data analytics, and machine learning software.

  • Bsquare Corporation
  • C3 IoT
  • Splunk
  • PTC
  • Thinglogix

Connectivity: Partners who provide systems to manage wide-area connectivity for edge and gateway devices.  Examples include device and subscription management platforms, billing and rating systems, device provisioning systems, and Mobile Network Operators (MNOs) and Mobile Virtual Network Operators (MVNOs)

  • Amdocs, Inc.
  • Asavie
  • Eseye

Congratulations to our launch IoT Consulting Competency Partners!

  • Accenture
  • Aricent
  • Cloud Technology Partners
  • Mobiquity, Inc.
  • Luxoft
  • Solstice
  • Sturdy
  • Trek10

Learn More

Hear from two of our launch AWS IoT Competency Partners, MachineShop and C3 IoT, as they discuss why they work with AWS and the value of the AWS IoT Competency for customers.
Want to learn more about the different IoT Partner Solutions? Click here.

Improved AWS IoT Management Console

For many customers, the management console is the primary tool for interacting with and monitoring AWS IoT. This includes connecting a first device, diving into thing details, finding key resources, and testing with the MQTT client. Over the past year, we received feedback from customers that drove the redesign of a new AWS IoT console available today.

In the new console, you will see:

  • New visual design for improved usability and navigation
  • Improved navigation to things, types, certificates, policies, and rules, making them easier to find
  • A new dashboard with account-level metrics
  • Streamlined MQTT web client to troubleshoot your IoT solutions
  • A new wizard to connect devices in fewer steps
  • A real-time feed of things’ lifecycle events and shadow activity

To try the new console experience, sign in to the console.

Your feedback is important to us as we continue to improve the AWS IoT console experience. To send feedback, please use the Feedback button in the footer of the console.

Fig 1 – New dashboard with account-level metrics.

Fig 2 – Things, types, certificates, policies, and rules all have their own areas.

Fig 3 – Drill in to resource details and take action.

How to Bridge Mosquitto MQTT Broker to AWS IoT

You can securely connect millions of objects to AWS IoT using our AWS SDKs or the AWS IoT Device SDKs. In the context of industrial IoT, objects are usually connected to a gateway for multiple reasons: sensors can be very constrained and unable to connect directly to the cloud, sensors may only be capable of using protocols other than MQTT, or you might need to perform analytics and processing locally on the gateway.

One feature of local MQTT brokers, called 'bridging', enables you to connect your local MQTT broker to AWS IoT so the two can exchange MQTT messages. This enables your objects to communicate bi-directionally with AWS IoT and benefit from the power of the AWS Cloud.

In this article we are going to explain use cases where this feature can be very useful and show you how to implement it.

Why Bridge your MQTT Broker to AWS IoT

Security is paramount in IoT, and the AWS IoT broker has a high level of security built in to authenticate and authorize devices based on standards like TLS 1.2 with client certificates.

If you have legacy IoT deployments, you might already have objects connected to an MQTT broker using other authentication mechanisms, like usernames and passwords. Your MQTT broker can be very close to where your sensors are deployed (a local MQTT broker) or in a remote location like the cloud.

If you plan to upgrade your current security standards to match those of AWS IoT but want to benefit from the scalability and Rules Engine of AWS IoT today, you can bridge your legacy MQTT broker to AWS IoT. This represents an easy transient solution that you can deploy quickly without having to wait for your current system's upgrade. Scaling beyond a single broker is not in the scope of this post; we will focus on the bridging feature of the Mosquitto MQTT broker.

Open source MQTT brokers like Mosquitto can be installed on many operating systems, such as Linux. For those wishing to deploy a local gateway quickly without developing extra code to send data to AWS IoT, installing Mosquitto on a local device can be an attractive solution, as you will also benefit locally from the Mosquitto broker's features (persisting messages locally, logging activity locally, and more).


How to Install Mosquitto MQTT Broker

The first step is to install the Mosquitto broker on your device or virtual machine; you can go to the Mosquitto download page for instructions.

Typically, you should install this on your local gateway. Mosquitto supports a wide range of platforms including many distributions of Linux. Therefore, you can run your local gateway on low powered devices as well as on a full-fledged server/virtual machine.

In our case we will install Mosquitto on an EC2 Amazon Linux instance which would be equivalent to having a local gateway running a Linux distribution.

If you are not planning on using an Amazon EC2 instance, you can skip to the section "How to Configure the Bridge to AWS IoT".

Launching and Configuring the EC2 Instance

Before launching an EC2 Amazon Linux instance to host the Mosquitto broker, we are going to create an IAM Role so we’ll be able to use the CLI on the instance to create keys and certificate in AWS IoT for the bridge.

  1. Go to the AWS Web Console and access the IAM service (Fig. 1)
  2. Click on Roles
  3. Click on Create New Role (Fig. 2)
  4. Name the role AWSIoTConfigAccess (Fig. 3)
  5. Click Next Step
  6. Select Amazon EC2 (Fig. 4)
  7. Filter with the value AWSIoTConfigAccess (Fig. 5)
  8. Select the policy AWSIoTConfigAccess and click on Next Step
  9. Review the Role and click on Create Role (Fig. 6)
  10. Now that the Role has been created you can go to Amazon EC2. Choose a region, preferably where AWS IoT is available, in this article I am using Frankfurt.
  11. Click on Launch Instance.
  12. Select Amazon Linux AMI 2016.03.1 (Fig. 7)
  13. Select the t2.micro instance type (Fig. 8)
  14. Click on Next: Configure Instance Details
  15. In the IAM Role, select AWSIoTConfigAccess (Fig. 9)
  16. Leave default parameters as shown in the picture and click on Next: Add Storage
  17. Leave everything as is and click on Next: Tag Instance
  18. Give a name to your instance ‘MosquittoBroker’
  19. Click on Next: Configure Security Groups
  20. Create a new security group (Fig. 10)
  21. Review and launch the EC2 instance
  22. Follow instructions to connect to the EC2 instance once it is running.
  23. Once logged in type the following commands:
#Update the list of repositories with one containing Mosquitto
sudo wget -O /etc/yum.repos.d/mqtt.repo
#Install Mosquitto broker and Mosquitto command line tools
sudo yum install mosquitto mosquitto-clients

How to Configure the Bridge to AWS IoT

Now that we have installed Mosquitto onto our EC2 instance (or local gateway), we will need to configure the bridge so that the Mosquitto broker can create a connection to AWS IoT. We will first use the AWS CLI to create the necessary resources on AWS IoT side.

Enter the following commands in your terminal:

#Configure the CLI with your region, leave access/private keys blank
aws configure

#Create an AWS IoT policy for the bridge
aws iot create-policy --policy-name bridge --policy-document '{"Version": "2012-10-17","Statement": [{"Effect": "Allow","Action": "iot:*","Resource": "*"}]}'

#Place yourself in Mosquitto directory
#And create certificates and keys, note the certificate ARN
cd /etc/mosquitto/certs/
sudo aws iot create-keys-and-certificate --set-as-active --certificate-pem-outfile cert.crt --private-key-outfile private.key --public-key-outfile public.key --region eu-central-1

#List the certificate and copy the ARN in the form of
# arn:aws:iot:eu-central-1:0123456789:cert/xyzxyz
aws iot list-certificates

#Attach the policy to your certificate
aws iot attach-principal-policy --policy-name bridge --principal <ARN_OF_CERTIFICATE>

#Add read permissions to private key and client cert
sudo chmod 644 private.key
sudo chmod 644 cert.crt

#Download root CA certificate
sudo wget -O rootCA.pem

We now have a client certificate for our bridge. This certificate is associated with an AWS IoT policy that grants the bridge all permissions (restrict this policy for your own usage). The bridge now has everything it needs to connect; we just need to edit the Mosquitto configuration file with our specific parameters.

#Create the configuration file
sudo nano /etc/mosquitto/conf.d/bridge.conf

Edit the following by replacing the value address with your own AWS IoT endpoint. You can use the AWS CLI to find it with ‘aws iot describe-endpoint’ as mentioned below. Then copy the content and paste it in the nano editor, finally save the file.

#Copy paste the following in the nano editor:
# =================================================================
# Bridges to AWS IOT
# =================================================================

# AWS IoT endpoint, use AWS CLI 'aws iot describe-endpoint'
connection awsiot
address <your-aws-iot-endpoint>:8883

# Specifying which topics are bridged
topic awsiot_to_localgateway in 1
topic localgateway_to_awsiot out 1
topic both_directions both 1

# Setting protocol version explicitly
bridge_protocol_version mqttv311
bridge_insecure false

# Bridge connection name and MQTT client Id,
# enabling the connection automatically when the broker starts.
cleansession true
clientid bridgeawsiot
start_type automatic
notifications false
log_type all

# =================================================================
# Certificate based SSL/TLS support
# -----------------------------------------------------------------
#Path to the rootCA
bridge_cafile /etc/mosquitto/certs/rootCA.pem

# Path to the PEM encoded client certificate
bridge_certfile /etc/mosquitto/certs/cert.crt

# Path to the PEM encoded client private key
bridge_keyfile /etc/mosquitto/certs/private.key

Now we can start the Mosquitto broker with this new configuration:

#Start Mosquitto in the background
sudo mosquitto -c /etc/mosquitto/conf.d/bridge.conf -d
#Enable Mosquitto to run at startup automatically
sudo chkconfig --level 345 mosquitto on

Making Sure Everything is Working

The broker has now started and has already connected to AWS IoT in the background. In our configuration we have bridged 3 topics:

  • awsiot_to_localgateway: any message received by AWS IoT from this topic will be forwarded to the local gateway
  • localgateway_to_awsiot: any message received by the local gateway will be forwarded to AWS IoT
  • both_directions: any message received on this topic by one broker will be forwarded to the other broker
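The direction rules above can be modeled as a small routing function. This is our own sketch of the in/out/both semantics from bridge.conf, not Mosquitto's implementation:

```python
# Sketch of the bridge's topic-direction rules from bridge.conf:
# 'out' topics flow local -> AWS IoT, 'in' topics flow AWS IoT -> local,
# and 'both' topics flow in either direction.
BRIDGED = {
    "awsiot_to_localgateway": "in",
    "localgateway_to_awsiot": "out",
    "both_directions": "both",
}

def forward(topic, origin):
    """origin is 'local' or 'awsiot'; return True if the bridge relays it."""
    direction = BRIDGED.get(topic)
    if direction is None:
        return False             # unlisted topics are never bridged
    if direction == "both":
        return True
    return (direction == "out") == (origin == "local")

print(forward("localgateway_to_awsiot", "local"))   # True
print(forward("localgateway_to_awsiot", "awsiot"))  # False
print(forward("both_directions", "awsiot"))         # True
```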


We will check that the topic localgateway_to_awsiot is working, feel free to check the whole configuration.

  • Go to the AWS IoT Console and click on MQTT Client
  • Click on Generate Client Id and Connect
  • Click on Subscribe to topic and enter localgateway_to_awsiot, click on Subscribe (Fig. 11)

Now that we have subscribed to this topic on the AWS IoT side, you can publish an MQTT message from your terminal (that is, from the local gateway) to see if it gets forwarded.

#Publish a message to the topic
mosquitto_pub -h localhost -p 1883 -q 1 -d -t localgateway_to_awsiot  -i clientid1 -m "{\"key\": \"helloFromLocalGateway\"}"

You should now see this message on your screen, delivered by AWS IoT thanks to the bridge.

When you are done testing with the Amazon EC2 instance, you can do the same with your own local or remote MQTT broker!

Next Steps

The bridge between your local broker and AWS IoT is up and running, and you might want to fine-tune some parameters of the bridge connection. Consult the Bridge section of the official Mosquitto documentation if you need additional details.

Now that your data is flowing through AWS IoT, you can create new IoT applications using other AWS services for machine learning, analytics, real-time dashboards, and much more, so do not hesitate to read our blog, documentation, and additional developer resources.