AWS Startups Blog

Internet of Things — Part 2a

By Todd Varland, Solutions Architect, AWS


Sensor data with Raspberry Pi and AWS, by two AWS Solutions Architects

“Let’s make some noise!”

The conference room, packed with several hundred people, erupts with clapping and cheering. At the front of the room, two huge monitors display the noise visually—a JavaScript chart displaying a Kinesis data stream being fed by a simple Raspberry Pi audio sensor. The makers in the room cheer louder.

The venue was the AWS Summit in San Francisco in March 2014. This blog post will outline how this near real-time visualization was built with Raspberry Pi, Amazon Kinesis, and a few other pieces of hardware.

As discussed in Part 1 of this series, Low Performance Computing (LPC) platforms such as the Arduino and Raspberry Pi are increasingly popular for purpose-specific solutions. A common use case for these LPC platforms is to combine their low cost and communications capabilities with attached sensors, feeding readings into a streaming data service such as Amazon Kinesis.

For the demo described above, the data collection and data upload flow was as follows: Sound Sensor (Analog mic) -> Analog to Digital converter -> Raspberry Pi -> AWS SDK (boto) -> Kinesis
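That pipeline can be sketched as a chain of small functions. Everything below is illustrative stand-in code, not the demo's actual implementation: fixed values stand in for the sensor and A/D converter hardware, and a plain list stands in for Kinesis.

```python
# Illustrative sketch of the demo's data path. read_adc() stands in for
# the sound sensor + PCF8591, and publish() stands in for the boto call.
import json

def read_adc():
    # Stand-in for the PCF8591: returns one 8-bit sample (0-255).
    return 128

def to_record(reading):
    # Wrap the raw sample in the JSON envelope used later in this post.
    return json.dumps({"values": {"x": reading}, "key": "rpi/sound_01"})

def publish(record, sink):
    # Stand-in for posting the record to a Kinesis stream.
    sink.append(record)

stream = []
publish(to_record(read_adc()), stream)
```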

Bill of Materials:

• DFROBOT Analog Sound Sensor (DFR0034)
• PCF8591 AD/DA converter module
• Raspberry Pi (Revision 2) (CanaKit Raspberry Pi Basic Kit from Amazon.com)
• An SD card (which acts as the hard drive; 16 GB works fine)
• niceEshop Mini 150M USB WiFi Wireless LAN 802.11 n/g/b adapter
• WiFi drivers and the AWS Python SDK installed on the Raspberry Pi
• A small Python script
• A prototyping breadboard and jumper wires to aid in connecting the parts together

The backend — AWS Services:

• AWS Identity and Access Management
• Amazon Kinesis
• Amazon Simple Storage Service (S3) static website hosting
• AWS SDK for JavaScript in the Browser
• Login with Amazon

In this post we'll focus on the front end: sensor considerations and implementation, getting the data collected and into AWS. In the next post, my AWS solutions architect colleague Brett will cover the backend: Amazon S3, the AWS SDK for JavaScript, Login with Amazon, what's happening in Amazon Kinesis, and visualizing the data.

The Front End:

The first task is to set up the Raspberry Pi, which takes less than an hour. Just follow the quick start guide from the raspberrypi.org site (I used the Raspbian distro, which is based on Debian). Adding Wi-Fi connectivity is accomplished by attaching the Wi-Fi dongle listed above and following the setup guide found on the Raspberry Pi HQ Projects page. Note: Be sure to purchase a compatible Wi-Fi dongle.

The next tasks are to run Linux update and to install the development and communication software:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python-dev
sudo apt-get install python-rpi.gpio  # the RPi-to-Python GPIO bridge
sudo apt-get install python-setuptools
sudo easy_install pip
sudo pip install boto

Install the I2C drivers as explained at SK Pang electronics. The I2C drivers enable the analog-to-digital board to provide sample data at a rate higher than the standard digital input of the Raspberry Pi will allow.
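One PCF8591 behavior worth knowing before writing the read loop: each I2C read returns the previously completed conversion, so a common pattern is to discard the first read after selecting a channel. The sketch below illustrates that pattern with a stand-in bus class (an assumption for demonstration, so it runs without hardware); on the Pi you would use smbus.SMBus(1) and the real device at address 0x48.

```python
# Sketch of the PCF8591 read cycle. FakeBus is a stand-in for smbus.SMBus
# so the pattern can be shown without hardware attached.
class FakeBus:
    def __init__(self):
        # First value plays the role of the stale, previously started conversion.
        self.samples = iter([0, 42, 99])

    def write_byte(self, addr, value):
        pass  # control byte: selects the analog input channel (0 = AIN0)

    def read_byte(self, addr):
        return next(self.samples)

bus = FakeBus()
bus.write_byte(0x48, 0)  # select channel AIN0
bus.read_byte(0x48)      # discard: returns the conversion started earlier
reading = bus.read_byte(0x48)  # first trustworthy sample
```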

The third task is to physically connect the analog-to-digital converter and the sound sensor. The sound sensor connects to the analog-to-digital converter, since the Raspberry Pi has no analog inputs. Both boards also need power and ground, which the Raspberry Pi supplies. You can see an illustration of the Raspberry Pi Revision 2 pinout at Megaleecher.net.

Analog sound sensor board:
• Red PIN connects to 5 volts DC from Raspberry Pi
• Black PIN connects to ground from Raspberry Pi
• Blue PIN connects to AIN0 on the analog-to-digital converter

PCF8591 AD/DA converter board:
• AIN0 connects to blue PIN on analog sound sensor
• SCL connects to I2C (SCL) from Raspberry Pi
• SDA connects to I2C (SDA) from Raspberry Pi
• GND connects to ground from Raspberry Pi
• VCC connects to 5 volts DC from Raspberry Pi
(Note: Remove the three red jumpers from the board; this will disable the onboard light and temp sensors.)

Tip: The breadboard and jumper wires listed in the Bill of Materials section above are very useful in setting up these connections.

Then we add the Python code (listed below) that reads the A/D converter over the I2C pins (SDA and SCL). This script is a command-line graphic display of ambient sound, a kind of ASCII-art equalizer:

# sound_sensor.py
# Read a value from analogue input 0 (AIN0)
# of the A/D in the PCF8591 at address 0x48.
# Readings range from 0 to 255.
from smbus import SMBus
import time

bus = SMBus(1)
print("Read the A/D")
print("Ctrl C to stop")
bus.write_byte(0x48, 0)  # set control register to read channel 0
while True:
    reading = bus.read_byte(0x48)
    print(str(reading) * reading)  # ASCII-art bar: louder = longer line
    time.sleep(0.1)  # sample roughly ten times a second
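The bar the script prints grows with the raw reading, so loud sounds wrap across the terminal. A small refinement (an addition here, not part of the demo script) is to scale the 8-bit sample to a fixed-width bar:

```python
def bar(reading, width=60):
    # Scale an 8-bit sample (0-255) to a fixed-width ASCII bar.
    filled = reading * width // 255
    return "#" * filled

print(bar(255))  # prints a full 60-character bar
print(bar(0))    # silence: prints an empty line
```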

For the demo we used the AWS SDK for Python (aka boto) for the application logic. Boto helps take the complexity out of coding by providing Python APIs for many AWS services, including Amazon S3, Amazon EC2, Amazon DynamoDB, and, in this case, Amazon Kinesis.

Now we'll update the Python code on the Raspberry Pi to post records to Amazon Kinesis. The result looks something like this:

import json
import boto
from smbus import SMBus

bus = SMBus(1)
print("Read the A/D and put records to Kinesis")
print("Ctrl C to stop")
bus.write_byte(0x48, 0)  # set control register to read channel 0
kinesis = boto.connect_kinesis()
while True:
    reading = bus.read_byte(0x48)
    record = json.dumps(
        {
            "values": {"x": reading},
            "key": "rpi/sound_01"
        }
    )
    # "myStreamName" is a placeholder; use your own stream's name
    kinesis.put_record("myStreamName", record, "rpi/sound_01")
…more in post 2b…
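One put_record call per sample means one HTTP request per loop iteration. A common refinement (an assumption here, not something the demo did) is to buffer several readings and flush them as a single record. The sketch below uses a stand-in client and a hypothetical stream name ("soundStream") so it runs anywhere:

```python
import json

class FakeKinesis:
    # Stand-in for the boto Kinesis connection, so the sketch runs
    # without AWS credentials or a real stream.
    def __init__(self):
        self.sent = []

    def put_record(self, stream_name, data, partition_key):
        self.sent.append((stream_name, data, partition_key))

def flush(buffer, client, stream_name="soundStream", key="rpi/sound_01"):
    # Send buffered samples as one JSON record, then clear the buffer.
    if buffer:
        payload = json.dumps({"values": buffer, "key": key})
        client.put_record(stream_name, payload, key)
        del buffer[:]

client = FakeKinesis()
buf = []
for reading in [10, 20, 30]:
    buf.append({"x": reading})
    if len(buf) >= 3:
        flush(buf, client)
```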

Finally: What is Amazon Kinesis?

Amazon Kinesis is a fully managed service for real-time processing of streaming data at massive scale. It can collect and process hundreds of terabytes of data per hour from hundreds of thousands of sources, making it easy to build applications that process information in real time from sources such as sensors, website clickstreams, marketing and financial information, manufacturing instrumentation, social media, operational logs, and metering data.
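Under the hood, Kinesis routes each record to a shard by taking the MD5 hash of its partition key as a 128-bit integer and matching it against each shard's hash-key range. The sketch below mimics that routing for an evenly split stream; the shard count and key are illustrative, not from the demo:

```python
import hashlib

def shard_for(partition_key, num_shards=2):
    # Kinesis hashes the partition key with MD5 and maps the 128-bit
    # result onto the shards' hash-key ranges. This mimics a stream
    # whose shards split the range evenly.
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // num_shards
    return min(h // range_size, num_shards - 1)

shard = shard_for("rpi/sound_01")
```

Because the routing is deterministic, all records from one sensor (one partition key) land on the same shard, preserving their order.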

For the AWS SF Summit demo we collected ambient sound in a single location (the conference room at the Moscone Convention Center). A real-world version of this project might serve a city or metro area that wants to understand sound levels, and noise pollution, over time across a large geographic area: deploy hundreds or thousands of these units, then analyze the collected data for its impact on citizens.

The ability for this particular Raspberry Pi unit to post data to this Amazon Kinesis stream was granted through an AWS Identity and Access Management user policy created by my colleague Brett Francis. In a future post, which will appear as Part 2b in this series, Brett will provide more detail around how posting data to the Amazon Kinesis stream was accomplished and more.

Read the final post in the series: Internet of Things — Part 2b