The Internet of Things on AWS – Official Blog

Before Developing Real Devices: Exploring a Business Outcome with Simulated Devices

Realize business outcomes with the IoT Device Simulator, AWS IoT Analytics, Amazon QuickSight, Microsoft Power BI, and Tableau

Customers often get bogged down in slow hardware development cycles before real, potential business outcomes have been identified and agreed upon. This post highlights how customers can use simulated devices to explore business outcomes in parallel with the actual development and engineering of the devices. Real devices can then be built and deployed to provide instrumented data that materializes those business outcomes.

This paradigm shift increases collaboration opportunities across functional areas (business and engineering/IT) during development and deployment, and creates a solid foundation for realizing the full value proposition of IoT implementations. We describe this benefit as “devices can be quickly built to meet the Minimum Loveable Product (MLP),” which reinforces the focus on device telemetry when developing the actual device hardware and software.

Sequential IoT Implementations

Traditional IoT implementations often follow a prescriptive, sequential pattern.

  1. Zoom-in on, design, or acquire IoT devices and perform field testing;
  2. Attach sensors to devices and perform functional testing;
  3. Develop and test IoT device software;
  4. Secure devices and connect them to AWS IoT Core (with or without AWS Greengrass);
  5. Define AWS policies for users and applications and attach to certificates;
  6. Perform data storage and basic analytics functions;
  7. Incorporate telemetry data into existing or new visualizations and reporting;
  8. Develop IoT operational reporting metrics (both KPIs for device operations and KPIs driven by the IoT solution);
  9. Evolve business integration capabilities and regularly communicate results;
  10. Continually reassess the ROI of the IoT value proposition, using IoT data and analytics as the key starting points.

Typically, business teams and key stakeholders cannot assess how an IoT implementation is performing until devices have been deployed at scale. This approach can delay the adoption of IoT-driven capabilities because 1) intrinsic business value cannot be easily understood and communicated, 2) the time gap between physical device development and actual IoT data generation, capture, and use can dampen adoption enthusiasm, and 3) testing and validation of large-scale IoT implementations can shift the focus away from analytics and possibly derail overall implementation success.

Iterative IoT Developments Start with Data and Analytics

With new capabilities driven by both analytics and device simulators, it is now possible to begin developing an IoT solution with simulated data in an iterative, agile fashion. Simulated data can be modeled so that it is indistinguishable from data generated by actual devices. This lets teams answer key questions early: What kinds of data are best for a particular IoT implementation? How frequently should the data be generated? What kinds of visualizations best complement or enrich existing business intelligence functions?

A typical iterative IoT analytics development pattern looks like the following.

  1. Simulate device data and iteratively fine-tune simulations with business, software, and engineering stakeholders. The main objective is to obtain data from device simulators nearly identical to the data devices will generate in production;
  2. Identify metadata needed to create analytics and visualizations. Some of it could reside in S3 buckets;
  3. Document enrichment, transformation, and filtering of data;
  4. Prototype visualizations with available BI tools, such as Power BI, Tableau, or QuickSight;
  5. Iterate steps 1 through 4 until business and technical objectives are met;
  6. Create design documents, information models, and artifacts needed to develop, test, and deploy the solution into production.

IoT Migrations Have a Head Start with AWS IoT Analytics

Another powerful reason to use IoT device simulators is to prepare migrations of existing IoT implementations to AWS IoT. Existing implementations already generate data, and using AWS IoT Core and AWS IoT Analytics can give organizations a realistic picture of how AWS can close gaps in their current analytics solutions. In addition, IoT simulations and visualizations can be approached from a business perspective first, which can then inform migration strategies at the technical level.

The underlying data plane (data model, data types, frequency, etc.) can remain the same as the current implementations while data ingestion and analytics cycles get additional innovation iterations using AWS out-of-the-box capabilities to integrate with existing BI tools and visualizations.

Overall Solution Architecture

Set Up the IoT Device Simulator

After getting an AWS account, the next prerequisite for obtaining business value out of IoT solutions is to set up the IoT Device Simulator. The IoT Device Simulator is a solution deployed into your AWS account that uses a web interface to let ‘soft’ devices interact with the AWS IoT endpoint of the region in your account.

After the IoT Device Simulator is deployed and you log on you should see a window similar to the one below.

Let’s set up a new device type, and under it create new devices (widgets). All devices created under a device type generate data with the same structure and frequency defined during device type creation.

I’ve created a device type called tankfillsensor with the following JSON payload. The data transmission duration is 6000 seconds (or 6000000 milliseconds), with a frequency of 5 seconds (5000 milliseconds). The device type publishes data on MQTT topic liquidtank/fillpercentage.

{
  "deviceid": "799fc110-fee2-43b2-a6ed-a504fa77931a",
  "fillpercentage": 99.4,
  "batterystatus": "true",
  "datetime": "2018-02-15T21:50:18",
  "latlong": "{ 'latitude': 38.9072, 'longitude': 77.0369 }",
  "gridlocationid": 1000,
  "attribute1": "asdqwiei1238"
}
After navigating to Widgets on the left-hand side, we click Add Widget and add 10 devices of the same type, tankfillsensor. Once created, we start each of them, and the device simulation begins.

We navigate to AWS IoT Core, then Test, and subscribe to liquidtank/fillpercentage, where we can see the data being published by the devices we defined. Notice that the device simulator adds its own device id (_id_) to each message, in case you forget to include one in your MQTT payload.

We can now engage the rules engine to send the data to an AWS IoT Analytics channel. We create a new rule called IoTAnalytics_liquid_tank_fill with the following SQL statement.

SELECT *, parse_time("yyyy-MM-dd HH:mm:ss z", timestamp()) as ingest_dt FROM 'liquidtank/fillpercentage'

For the rule action, we choose sending the MQTT messages to an IoT Analytics channel called liquid_tank_fill, which we create at the same time we create the rule action.
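The same rule could also be created programmatically. Below is a hedged sketch using boto3's create_topic_rule; the IAM role ARN is a placeholder and must be replaced with a role permitted to write to IoT Analytics:

```python
def rule_payload(channel_name, role_arn):
    """Topic rule payload forwarding matching messages to an IoT Analytics channel."""
    # SQL statement from the rule above
    sql = ('SELECT *, parse_time("yyyy-MM-dd HH:mm:ss z", timestamp()) '
           "as ingest_dt FROM 'liquidtank/fillpercentage'")
    return {
        "sql": sql,
        "awsIotSqlVersion": "2016-03-23",
        "actions": [{
            "iotAnalytics": {
                "channelName": channel_name,
                "roleArn": role_arn,
            }
        }],
    }

def create_rule(role_arn):
    """Create the topic rule in your account (requires AWS credentials)."""
    import boto3
    client = boto3.client("iot")
    client.create_topic_rule(
        ruleName="IoTAnalytics_liquid_tank_fill",
        topicRulePayload=rule_payload("liquid_tank_fill", role_arn),
    )
```

Scripting the rule this way keeps the SQL statement and channel wiring under version control alongside the rest of the solution.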

We navigate to IoT Analytics and can see that the liquid_tank_fill channel has its data retention period set to indefinite.

We then click on Pipeline, create a new pipeline called pipeline_ltf, define a few activities, and store the data in a data store called fillpercentage_ds1. We add two activities to the pipeline: one stores the fill percentage as a decimal value (fillpercentage1 = fillpercentage / 100) and the other computes batterystate = batterystatus + 1.
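Assuming the two transforms are fillpercentage / 100 and batterystatus + 1, an equivalent pipeline could be created with boto3's create_pipeline. The activity names in this sketch are illustrative:

```python
def pipeline_activities():
    """Activity chain: channel -> two math transforms -> data store."""
    return [
        {"channel": {"name": "source", "channelName": "liquid_tank_fill",
                     "next": "to_decimal"}},
        {"math": {"name": "to_decimal", "attribute": "fillpercentage1",
                  "math": "fillpercentage / 100", "next": "battery_state"}},
        {"math": {"name": "battery_state", "attribute": "batterystate",
                  "math": "batterystatus + 1", "next": "store"}},
        {"datastore": {"name": "store", "datastoreName": "fillpercentage_ds1"}},
    ]

def create_pipeline():
    """Create the pipeline in your account (requires AWS credentials)."""
    import boto3
    client = boto3.client("iotanalytics")
    client.create_pipeline(pipelineName="pipeline_ltf",
                           pipelineActivities=pipeline_activities())
```

Each activity points at the next one via its "next" field, so the console and the API describe the same channel-to-data-store chain.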

Now the data store contains the two additional attributes defined during pipeline processing, and we can create a data set from it.

On the IoT Analytics window, click on Analyze and then Data Sets. Create a data set called fillpercentage_dataset1 with the simple query below. We can see both fillpercentage1 and batterystate added alongside the original incoming fields.

SELECT * FROM fillpercentage_ds1 where __dt >= current_date - interval '7' day

Schedule the data set to run every hour.
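The data set and its hourly schedule can likewise be created with boto3's create_dataset. The action name in this sketch is illustrative:

```python
def dataset_params():
    """Parameters for an hourly-scheduled SQL data set."""
    sql = ("SELECT * FROM fillpercentage_ds1 "
           "where __dt >= current_date - interval '7' day")
    return {
        "datasetName": "fillpercentage_dataset1",
        "actions": [{
            "actionName": "query_fill",
            "queryAction": {"sqlQuery": sql},
        }],
        "triggers": [{"schedule": {"expression": "rate(1 hour)"}}],
    }

def create_dataset():
    """Create the scheduled data set in your account (requires AWS credentials)."""
    import boto3
    client = boto3.client("iotanalytics")
    client.create_dataset(**dataset_params())
```

The rate(1 hour) trigger expression produces the same hourly refresh configured in the console.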

You can access the data set for reporting in Jupyter notebooks or directly in QuickSight. However, if you use Power BI or Tableau for your broader enterprise reporting needs, you may need to download the data set as a file and import it into your own enterprise BI applications.

An example of a visualization created using QuickSight can be seen below. To connect QuickSight to an IoT Analytics data set, click on Manage Data, then New Data Set, scroll down to AWS IoT Analytics, and select fillpercentage_dataset1.


The raw data set can be accessed via the CSV link on the left-hand side, and the file can be downloaded for use in Power BI or Tableau. Another option is to use a Python script similar to the one below (executed hourly, with a 30-minute offset) to download the CSV data sets from AWS IoT Analytics.

The script below gets the CSV data set fillpercentage_dataset1 and saves it locally as filetoimport.csv. It can also save a copy to your own S3 bucket.

import os
import urllib.request

import boto3

# file where the AWS CLI command output will be stored,
# and the AWS IoT Analytics data set to download
filename = 'foobar.txt'
awsiotdataset = 'fillpercentage_dataset1'
s3bucket = 'iotidcv'
s3filename = 'filetoimport_s3.csv'

# build the AWS CLI command to get the URL of the data set;
# --output text makes the 'ENTRIES ' parsing below predictable
command = ('aws iotanalytics get-dataset-content --dataset-name '
           + awsiotdataset + ' --output text > ' + filename)

# execute the command
os.system(command)

## utility functions for string manipulation
def left(s, amount):
    return s[:amount]

def right(s, amount):
    return s[-amount:]

def mid(s, offset, amount):
    return s[offset:offset + amount]


# function to save a file to an S3 bucket
def save_to_s3(bucket, filename, content):
    client = boto3.client('s3')
    # change the S3 key as you see fit
    k = 'folder/subfolder/' + filename
    client.put_object(Bucket=bucket, Key=k, Body=content)

# read in the AWS CLI command output
with open(filename) as f:
    lines = f.readlines()

# get the second line of the AWS CLI output - 0 is the first line
linewithurl = lines[1]

# extract the pre-signed URL of the S3 object where the data set content is stored
url = right(linewithurl, len(linewithurl) - len('ENTRIES ')).strip()

# open the pre-signed URL and download the file as a CSV called filetoimport.csv
urllib.request.urlretrieve(url, 'filetoimport.csv')

# just for confirmation, show the pre-signed URL
print(url + '\n URL length: ' + str(len(url))
      + '. Total line length with URL in it is: ' + str(len(linewithurl))
      + '. IoT Data Set saved as filetoimport.csv')

# optional: save the CSV to your own S3 bucket
# (change the bucket name at the top of the script to match yours)
save_to_s3(s3bucket, s3filename, open('filetoimport.csv', 'rb'))

print('CSV file saved to S3 bucket name: ' + s3bucket
      + ', file name: folder/subfolder/' + s3filename + '.')

The script above uses the AWS CLI, but you can easily change it to call the same API through boto3 and read the dataURI directly from the response payload.

client = boto3.client('iotanalytics')

response = client.get_dataset_content(datasetName='fillpercentage_dataset1')
url = response['entries'][0]['dataURI']

On the instance with Power BI installed, I can now use the CSV file from my desktop.

A screen similar to the one below is presented and I can begin creating a visualization in Power BI within minutes.

This is an example of a Power BI visualization that shows the tank fill percentage values by datetime and device id based on the CSV file generated by AWS IoT Analytics.

The example below displays a Power BI visualization filtered for a particular device and with a trend line.

In Tableau I download the file from the S3 bucket and connect to it. I get a screen similar to the one below.

Tableau can also be used to connect to Amazon Athena.

And we can use the CSV data to create Tableau visualizations.

Here is a time series visual example that zooms in on the data of a particular device.


This blog explores how AWS IoT Analytics can be used as an active, iterative, and interactive platform around which IoT solutions are quickly prototyped, developed, deployed, and monitored. Hard dependencies on device telemetry are removed by using the IoT Device Simulator, and business outcomes are visualized in a variety of reporting tools, from Amazon QuickSight to Microsoft Power BI and Tableau.

To summarize, AWS IoT Analytics together with the IoT Device Simulator can be used both strategically (planning, ideation, and data mappings) and tactically. From the early stages of IoT application development through production, it helps not only visualize IoT-generated data but also identify the potential sensors and metrics associated with each device, directly informing stakeholders about key business metrics. Clean data sets generated by AWS IoT Analytics can be quickly integrated with existing Jupyter notebooks (Python), BI dashboards, enterprise visualization tools, and operational reporting. In addition, clean IoT data sets can be used directly to develop and operationalize machine learning models with Amazon SageMaker.