Strengthening Cyber Security Across the Department of Defense

A guest post by Gabriele McCormick, Lead Communications Specialist, Enlighten IT Consulting

Protecting U.S. cyber assets has become a top-level priority. In October, the Senate passed the Cybersecurity Information Sharing Act of 2015, designed to enhance cybersecurity threat information sharing between the U.S. government and the private sector. The Department of Defense (DoD) has been fighting adversaries who have harnessed technology to attack the U.S. in ways no one could have imagined five years ago. To defend the DoD's information networks, cyber analysts must comb through a vast, unstructured volume of cyber defense data to detect, assess, and mitigate threats quickly. To support this mission, Enlighten IT Consulting (EITC) developed and deployed the Big Data Platform (BDP) for the Defense Information Systems Agency (DISA) in 2012. The platform is currently used by mission partners across the DoD.

The BDP is a robust and scalable architecture capable of ingesting, storing, and visualizing multiple petabytes of cyber data. Its distributed data structures and streaming ingest capabilities provide storage and retrieval rates in the millions of records per second. EITC also developed and deployed a suite of cyber situational awareness analytics to the BDP, giving analysts tools for accelerated attack detection, diagnosis, and threat mitigation.

Hosted on AWS GovCloud (US), the components of the BDP mesh with the underlying cloud to create a secure platform accredited for use across the DoD. Working with the AWS team has enabled EITC to fully meet its federal customers' needs in a fiscally constrained DoD environment. Deploying the BDP on AWS GovCloud (US) was a key factor in reducing costs: it eliminated sustainment costs for hardware, power, space, cooling, facilities, and bandwidth, freeing a larger share of the budget for analytics development and platform enhancements.

To keep pace with adversaries who constantly change attack vectors and methods, EITC recognized the need to give analytic developers a sophisticated framework for rapidly developing and testing their analytics at a reasonable cost. EITC developed the Rapid Analytic Deployment and Management Framework (RADMF), which can set up and deploy a BDP environment in AWS GovCloud (US) in minutes.
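
RADMF itself is proprietary, but the general pattern it automates, provisioning a complete environment from a single template, can be sketched with boto3 and AWS CloudFormation. The template file, stack name, and parameters below are hypothetical placeholders, not RADMF internals:

```python
import boto3

# Hedged sketch of template-driven environment provisioning, the general
# pattern behind frameworks like RADMF. The template file, stack name, and
# parameters are hypothetical placeholders, not RADMF internals.
cfn = boto3.client("cloudformation", region_name="us-gov-west-1")

with open("bdp-environment.yaml") as f:  # hypothetical template
    template_body = f.read()

cfn.create_stack(
    StackName="analytic-dev-environment",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "ClusterSize", "ParameterValue": "5"}],
    Capabilities=["CAPABILITY_IAM"],
)

# Block until the full environment is ready: typically minutes, not weeks.
cfn.get_waiter("stack_create_complete").wait(StackName="analytic-dev-environment")
print("Environment is up.")
```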

In RADMF, government customers are developing, testing, and validating analytics; ingesting and visualizing data; and running computations and algorithms. It has also proven to be an excellent training environment as new analytics are developed and released. RADMF provides an instant feedback loop from the analyst to the development team as they iterate through the development process.

The continuous back-end tech refresh means that developers are always working with the latest BDP release. Kevin Reynolds, CEO of the IT consulting firm RBR-Technologies, based in the Baltimore, Md. area, says that for customers who have been frustrated by other systems failing to meet their needs, RADMF has been the perfect solution. RADMF's "pay as you go" pricing is an added incentive. "With the push of a button, my customers can deploy a full cloud-based analytics environment for only a few thousand dollars per month," Reynolds says.

Learn more about AWS GovCloud (US) in the GovCloud track at the AWS Public Sector Summit June 20-21 in Washington, DC. Register today.

The Future of Policing: Body Cameras, Video Storage, and Data Management

Body cameras and digital evidence management solutions are the fastest growing technologies in the justice and public safety space. According to a recent survey sponsored by the Department of Homeland Security, 94.5 percent of law enforcement agencies had either implemented or were fully committed to implementing body cameras. Empirical research has shown that body cameras significantly reduce use-of-force incidents and citizen complaints, helping to improve public trust. AWS cloud technology and its partner ecosystem enable law enforcement to collect, securely store, and analyze body camera and other video data, from dash cameras to surveillance cameras to citizen-generated video.

AWS knows video

AWS has experience and expertise managing the demands and challenges that vast amounts of data create. From bandwidth, streaming, and content storage to analysis, AWS can help solve video storage and data management needs of any kind.

Netflix delivers billions of hours of content globally by running on AWS. Netflix chose AWS for its redundancy, reliability, and the ability to rapidly implement DevOps capabilities for its business.

The AWS Cloud makes video storage, analysis, and data management possible through its scalable, secure, and flexible environment. This allows police departments to meet evolving video retention mandates, while saving money and streamlining deployment for a multitude of video-based solutions.
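
As an illustration of how a retention mandate might be expressed in practice, the following hedged sketch uses boto3 to attach a lifecycle policy to an S3 bucket, moving footage to lower-cost infrequent-access storage after 30 days and expiring it after a hypothetical 365-day retention period (bucket name, prefix, and periods are placeholders):

```python
import boto3

# Hedged sketch: express a video retention mandate as an S3 lifecycle policy.
# Footage stays immediately accessible for 30 days, moves to lower-cost
# infrequent-access storage, and expires after a hypothetical 365-day
# retention period. Bucket name, prefix, and periods are placeholders.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bodycam-footage",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "bodycam-retention",
                "Filter": {"Prefix": "footage/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```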

Solutions through our partner community

Through AWS and our expert partner community, the AWS Partner Network (APN), you can transform your department with next-generation technology, from body-worn cameras and video redaction software to complete digital evidence management systems.

For example, our APN partner Utility, Inc. has developed a body-worn technology that has automatic recording triggers based on policies, officer-down reporting and alerting, live video streaming, and secure automatic wireless offload to AWS Cloud storage. Utility's BodyWorn video, audio, and metadata can be accessed through its cloud-based digital evidence management solution, AVaiL Web™. Utility's other solutions, the Rocket IoT™ in-car video system and the Smart Redaction™ application for releasing video to the public, also leverage AWS Cloud storage, providing mission-critical mobile intelligence.

Additionally, Motorola offers a digital evidence management solution that simplifies the way your agency captures, stores, and manages multimedia content. The solution includes the Si Series Video Speaker Microphone, which combines voice communications, body-worn video, still images, voice recording, and emergency alerting in one compact, easy-to-use device. Integrated with Motorola's secure cloud-based CommandCentral Vault digital evidence management software, this solution is streamlining technology and reducing costs for law enforcement everywhere.

Being able to record, transmit, store, redact, and share digital evidence is no easy task. Cloud technology makes the video and digital evidence life cycle easier to manage than an on-premises IT environment can. Video storage and data management needs will only continue to grow in volume and importance for your agency in the coming years. Come learn how AWS and our public safety partner community can help you with your video and data management needs, so that you can focus on what matters most and experience the future of policing.

Want to learn more about how Netflix uses AWS for DevOps? Join us at the AWS Public Sector Summit June 20-21 in Washington, DC to hear Ben Hagen, Engineering Manager, Cloud Security Tools and Operations at Netflix, explore new and interesting ways to make security reactive within cloud environments by dynamically changing the environment in response to, and in preparation for, security incidents. In addition to Netflix, don't miss hearing from state and local government leaders, including Tom Schenk, Chief Data Officer, City of Chicago; Bassam Amrou, CIO, Sacramento County DA; and Jay Haque, Director of Development Operations and Enterprise Computing, The New York Public Library.

For more information on AWS and our public safety APN ecosystem, please visit https://aws.amazon.com/stateandlocal/justice-and-public-safety/

The Evolution of High Performance Computing: Architectures and the Cloud

A guest blog by Jeff Layton, Principal Tech, AWS Public Sector

In High Performance Computing (HPC), users are performing computations that no one ever thought possible. For example, researchers are performing statistical analysis of Supreme Court voting records; sequencing the genomes of humans, plants, and animals; creating deep learning networks for object and facial recognition so that cars and Unmanned Aerial Vehicles (UAVs) can guide themselves; searching for new planets in the galaxy; looking for trends in human behavioral patterns; analyzing social patterns in user habits; targeting advertisement development and placement; and pursuing thousands of other applications.

From lotions to aircraft, the products and services connected with HPC touch us each and every day, and we often don't even realize it.

A great number of these applications stem from the massive amount of data that has been collected and stored. This is true of both classic HPC applications and new ones, such as deep learning, which needs massive data sets for training and large test sets for validating the model. These are very data-driven applications, and their scale is growing every day.

A key feature of this “new” HPC is that it needs to be flexible and scalable to accommodate these new applications and the associated sea of data. New applications and algorithms are developed each year and their characteristics can vary widely, resulting in the need for increasingly diverse hardware support and new software architectures.

The cloud allows users to dynamically create architectures as they are needed, using the right amount of compute power (CPU or GPU), network, databases, data storage, and analysis tools. Rather than the classic model of fitting the application software to the hardware, the cloud allows the application software to define the infrastructure.

The cloud has a number of capabilities that map to the evolving nature of HPC, including:

  1. Scale and Elasticity
  2. Code as Infrastructure
  3. Ability to experiment

Scale and Elasticity

Thousands upon thousands of compute resources, massive storage capacity, and high-performance network resources are available worldwide via the cloud.

Combining scale and elasticity creates a capability for HPC cloud users that doesn't exist for centralized shared HPC resources. If resources can be provisioned and scaled as needed and there is a large pool of resources, then waiting in job queues is a thing of the past. Each HPC user in the cloud can have access to their own set of HPC resources, including compute, networking, and storage, for their own specific applications, with no need to share those resources with other users. They have zero queue time and can create the architectures that their applications need.
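
As a rough sketch of what zero queue time looks like in practice, the boto3 snippet below requests a private pool of instances on demand and releases it when the job completes. The AMI ID, instance type, and node count are placeholders:

```python
import boto3

# Hedged sketch: provision a private pool of compute on demand and release it
# when the job finishes. AMI ID, instance type, and node count are placeholders.
ec2 = boto3.resource("ec2")

nodes = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="c4.8xlarge",        # placeholder compute-optimized type
    MinCount=32,
    MaxCount=32,
)

# ... run the job against these nodes; there is no shared queue to wait in ...

# Release the resources as soon as the job completes; billing stops with them.
for node in nodes:
    node.terminate()
```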

Code as Infrastructure

Cloud computing also features the ability to build or assemble architectures or systems using only software (code), in which the software serves as the template for provisioning hardware. Instead of having to assemble physical hardware in a specific location and manage things like cabling, cable labels, switch configuration, router software, and patching, HPC in the cloud allows the various components to be specified in a small amount of code, making it easy to expand, contract, or even re-architect on the fly.

Code as infrastructure addresses the classic HPC problem of inflexible hardware and architecture. However, if a classic cluster architecture is needed, then that can be easily created in the cloud. If a different application needs a Hadoop architecture or perhaps a Spark architecture, then those too can be created. Only the software changes.
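
A minimal sketch of this idea with boto3 and AWS CloudFormation: the cluster is described as a document, and swapping in a different template (say, one describing a Hadoop or Spark architecture) changes the system without touching hardware. Resource details are illustrative placeholders:

```python
import json
import boto3

# Hedged sketch of "code as infrastructure": the cluster is a document, not a
# rack. Swapping this template for a Hadoop or Spark one changes the
# architecture with no physical work. Resource details are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ClusterPlacementGroup": {
            "Type": "AWS::EC2::PlacementGroup",
            "Properties": {"Strategy": "cluster"},
        },
        "ComputeNode": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
                "InstanceType": "c4.8xlarge",
                "PlacementGroupName": {"Ref": "ClusterPlacementGroup"},
            },
        },
    },
}

boto3.client("cloudformation").create_stack(
    StackName="classic-hpc-cluster",
    TemplateBody=json.dumps(template),
)
```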

Ability to Experiment

As HPC continues to evolve, new applications are being developed that take advantage of experimentation, testing, and iteration. These applications may involve new architectures or even a re-thinking of how the applications are written (re-interpretation). Having access to modular, fungible resources as a set of building blocks that can be configured and reconfigured as needed is crucial for this new approach.

This will become even more important as HPC moves forward, because the new wave of applications is heavily oriented toward massive data. Pattern recognition, machine learning, and deep learning are examples of these new applications, and the ability to create new architectures will allow them to flourish and develop based on the scale, flexibility, and economics of the cloud.

See how HPC is used for open data and scientific computing here: www.aws.amazon.com/scico and www.aws.amazon.com/opendata. And check out Jeff’s previous blog The Evolution of High Performance Computing.

AWS Public Sector Summit Countdown: Learn how NGA, DOJ, and NREL are Innovating with Cloud Computing

The agenda is now live for the AWS Public Sector Summit! Governments, educational institutions, and nonprofits from around the world are coming to DC to share their journeys to the cloud with you. You can expect to hear updates on new services and offerings from AWS tech experts, as well as insight from your peers in deep-dive sessions and interactive panel discussions on:

  • Using AWS to Meet Requirements for HIPAA, FERPA, and CJIS
  • Next Generation Open Data Platforms
  • Policy as a Strategic Enabler for Cloud Adoption
  • Hybrid Architectures: It’s Not All or Nothing
  • Security Updates
  • Adoption Models: How Different Organizations are Approaching Cloud Adoption

Don’t miss hearing from leaders across the public sector, including:

  • Sue Gordon, Deputy Director, National Geospatial-Intelligence Agency (NGA)
  • Prad Prasoon, Business Technology Strategist, American Heart Association (AHA)
  • Jay Haque, Director of Development Operations and Enterprise Computing, The New York Public Library
  • Debbie Brodt-Giles, Digital Assets Supervisor, National Renewable Energy Laboratory (NREL)
  • Adrian Farley, CIO, CA Department of Justice (DOJ)

These are just some of the leaders who will be sharing their perspectives. View session details, including more featured speakers, titles, and abstracts here.

We will also be announcing the City on a Cloud Innovation Challenge winners during the keynote on June 21. And we are thrilled to have AWS CEO Andy Jassy joining us at this year’s Summit – don’t miss hearing his insights on June 21!

Will you be joining us? Register now for the complimentary event!

Rapidly Recover Mission-Critical Systems in a Disaster

Due to common hardware and software failures, human errors, and natural phenomena, disasters are inevitable, but IT infrastructure loss shouldn't be. With the AWS cloud, you can rapidly recover mission-critical systems while optimizing your Disaster Recovery (DR) budget.

Thousands of public sector customers, like St Luke’s Anglican School in Australia and the City of Asheville in North Carolina, rely on AWS to enable faster recovery of their on-premises IT systems without unnecessary hardware, power, bandwidth, cooling, space, and administration costs associated with managing duplicate data centers for DR.

The AWS cloud lets you back up, store, and recover IT systems in seconds by supporting popular DR approaches, from simple backups to hot standby solutions that fail over at a moment's notice. And with 12 regions (and 5 more coming this year!) and multiple AWS Availability Zones (AZs), you can recover from disasters anywhere, any time. The following figure shows a spectrum of the four scenarios, arranged by how quickly a system can be made available to users after a DR event.

These four scenarios include:

  1. Backup and Restore – This simple, low-cost DR approach backs up your data and applications from anywhere to the AWS cloud for use during recovery from a disaster. Unlike conventional backup methods, data is not backed up to tape. Amazon Elastic Compute Cloud (Amazon EC2) instances are used only as needed for testing. With Amazon Simple Storage Service (Amazon S3), storage costs are as low as $0.015/GB stored for infrequent access (see the sketch after this list).
  2. Pilot Light – The idea of the pilot light is an analogy from gas heating: a small flame that is always on can quickly ignite the entire furnace to heat a house. In this DR approach, you replicate a limited set of core services so that the AWS cloud environment can seamlessly take over in the event of a disaster. A small part of your infrastructure is always running, continuously syncing mutable data (such as databases or documents), while other parts are switched off and used only during testing. Unlike the backup-and-restore approach, you must ensure that your most critical core elements are already configured and running in AWS (the pilot light). When the time comes for recovery, you can rapidly provision a full-scale production environment around that critical core.
  3. Warm Standby – The term warm standby is used to describe a DR scenario in which a scaled-down version of a fully functional environment is always running in the cloud. A warm standby solution extends the pilot light elements and preparation. It further decreases the recovery time because some services are always running. By identifying your business-critical systems, you can fully duplicate these systems on AWS and have them always on.
  4. Multi-Site – A multi-site solution runs on AWS as well as on your existing on-site infrastructure in an active-active configuration. The data replication method you employ is determined by your recovery objectives: your Recovery Time Objective (the maximum allowable downtime before degraded operations are restored) and your Recovery Point Objective (the maximum allowable window during which you will accept the loss of transactions in the DR process).
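
Here is a minimal sketch of the backup-and-restore scenario referenced in item 1, using boto3 to push a nightly backup to S3 in the infrequent-access storage class and to retrieve it during recovery. Bucket, key, and file names are placeholders:

```python
import boto3

# Hedged sketch of the backup-and-restore scenario in item 1: push a nightly
# database dump to S3 using the infrequent-access storage class behind the
# quoted ~$0.015/GB-month price. Bucket, key, and file names are placeholders.
s3 = boto3.client("s3")

s3.upload_file(
    Filename="/backups/nightly-db-dump.tar.gz",  # placeholder backup file
    Bucket="example-dr-backups",                 # placeholder bucket
    Key="nightly/db-dump-latest.tar.gz",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)

# During recovery, pull the backup down to a freshly provisioned EC2 instance.
s3.download_file(
    Bucket="example-dr-backups",
    Key="nightly/db-dump-latest.tar.gz",
    Filename="/restore/db-dump.tar.gz",
)
```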

Learn more about using AWS for DR in this white paper. You can also learn about backup and restore architectures, including partner products and solutions that assist in backup, recovery, DR, and continuity of operations (COOP), at the AWS Public Sector Summit in Washington, DC on June 20-21, 2016. Learn more about the complimentary event and register here.


Bring Your Own Windows 7 Licenses for Amazon WorkSpaces

Guest post by Len Henry, Senior Solutions Architect, Amazon Web Services

Amazon WorkSpaces is our managed virtual desktop service in the cloud. You can easily provision cloud-based desktops and allow users to access your applications and resources from any supported device. The Bring Your Own Windows 7 Licenses (BYOL) feature of Amazon WorkSpaces furthers our commitment to providing you with lower costs and greater control of your IT resources.

If you are a Microsoft Volume Licensing license holder with tools and processes for managing Windows desktop solutions, you can reduce the cost of your WorkSpaces (up to 16% less per month) and use your existing desktop image for your WorkSpaces. Let's get started.

Architectural Designs

Your WorkSpaces can access your on-premises resources when you extend your network into AWS. You can also extend your existing Active Directory into AWS. This white paper describes how you achieve connectivity, and the images below take you through different points of connection.

Figure 1 Amazon WorkSpaces when using an AWS Directory Service and a VPN Connection

Figure 2 Amazon WorkSpaces when using an AWS Directory Service and AWS Direct Connect

As part of the implementation, you will create a dedicated VPC. You will also create a dedicated Directory Service (the dedicated directory option will not be present until the WorkSpaces team enables the BYOL account). You can use Amazon WorkSpaces with your existing Active Directory or one of the AWS Directory Service options.

You can extend your Active Directory into AWS by deploying additional domain controllers into the AWS cloud or by using our managed Directory Service's AD Connector feature to proxy your existing Active Directory. We provide specific guidance on how to extend your on-premises network here. You can use AWS Directory Service to create three types of directories:

  1. Simple AD: a Samba 4-powered, Active Directory-compatible directory in the cloud.
  2. Microsoft AD: powered by Windows Server 2012 R2.
  3. AD Connector: recommended for leveraging your on-premises Active Directory.

Your choice of Directory Service depends on the size of your Active Directory and your need for specific Active Directory features. Learn more here.
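
For example, if AD Connector fits your environment, a minimal boto3 sketch of creating one looks like the following (all names, IDs, addresses, and credentials are placeholders):

```python
import boto3

# Hedged sketch: create an AD Connector that proxies an existing on-premises
# Active Directory. All names, IDs, addresses, and credentials are placeholders.
ds = boto3.client("ds")

ds.connect_directory(
    Name="corp.example.com",          # placeholder on-premises domain
    Password="placeholder-password",  # service account credential
    Size="Small",
    ConnectSettings={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "CustomerDnsIps": ["10.0.0.10", "10.0.0.11"],
        "CustomerUserName": "ad-connector-svc",
    },
)
```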

With BYOL, you use your 64-bit Windows 7 desktop image on hardware that is dedicated to you. We use your image to provision WorkSpaces and validate that it is compatible with our service.

A typical implementation proceeds through several milestones with your AWS account team. You provide estimates of your initial and expected growth in active WorkSpaces, and AWS selects resources for your WorkSpaces based on your needs. Your BYOL WorkSpaces are deployed on dedicated hardware so that you can use your existing software licenses. Tools and AWS features include:

  • OVA – You provide images for BYOL in the industry-standard OVA format for virtual machines. You can use any of the following software to export to an OVA: Oracle VM VirtualBox, VMware vSphere, Microsoft System Center 2012 Virtual Machine Manager, and Citrix XenServer.
  • VM Import – You use VM Import through the AWS Command Line Interface (AWS CLI) or the SDKs. You run the import-image operation after your OVA has been uploaded to Amazon Simple Storage Service (Amazon S3); see the sketch after this list.
  • VPC Wizard – You will create several VPC resources for your BYOL VPC. The VPC Wizard can create your VPC and configure public/private subnets and even a hardware VPN.
  • AWS Health Check Website – You can use this site to check whether your local network meets the requirements for using WorkSpaces. You also get a suggestion for the AWS Region in which you should deploy your WorkSpaces.

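Putting the VM Import step together, the following hedged sketch uses boto3 rather than the raw CLI: it uploads the exported OVA to Amazon S3 and then starts an import-image task. Bucket, key, and file names are placeholders, and the WorkSpaces team still performs BYOL validation separately:

```python
import boto3

# Hedged sketch of the VM Import step above, via boto3 rather than the raw
# CLI: upload the exported OVA to S3, then start an import-image task.
# Bucket, key, and file names are placeholders; the WorkSpaces team still
# performs BYOL validation separately.
s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

s3.upload_file(
    Filename="windows7-base.ova",  # placeholder exported image
    Bucket="example-byol-images",  # placeholder bucket
    Key="windows7-base.ova",
)

response = ec2.import_image(
    Description="Windows 7 BYOL base image",
    DiskContainers=[
        {
            "Format": "ova",
            "UserBucket": {
                "S3Bucket": "example-byol-images",
                "S3Key": "windows7-base.ova",
            },
        }
    ],
)
print("Import task started:", response["ImportTaskId"])
```
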
A proof of concept (POC) with public bundles will give your team experience using and supporting WorkSpaces. A POC can help verify your network, security, and other configurations. By submitting a base Windows 7 image, you reduce the likelihood of your customizations impacting onboarding. You can customize your image after onboarding, and regularly scheduled meetings with your AWS account team make it easier to coordinate on your implementation.

With WorkSpaces, you can reduce the work necessary to manage a Virtual Desktop Infrastructure (VDI) solution. This automation can help you manage a large number of users. The WorkSpaces API provides commands for typical use cases: creating a WorkSpace, checking the health of a WorkSpace, and rebooting a WorkSpace. You can use the WorkSpaces API to create a portal for managing your WorkSpaces or for user self-service.
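
A short boto3 sketch of those three typical calls, with placeholder directory, bundle, user, and WorkSpace IDs:

```python
import boto3

# Hedged sketch of the three typical WorkSpaces API calls named above.
# Directory, bundle, user, and WorkSpace IDs are placeholders.
ws = boto3.client("workspaces")

# Create a WorkSpace for a user.
ws.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-0123456789",  # placeholder directory
            "UserName": "jdoe",             # placeholder user
            "BundleId": "wsb-0123456789",   # placeholder bundle
        }
    ]
)

# Check the state (health) of existing WorkSpaces.
for workspace in ws.describe_workspaces()["Workspaces"]:
    print(workspace["WorkspaceId"], workspace["State"])

# Reboot a specific WorkSpace.
ws.reboot_workspaces(RebootWorkspaceRequests=[{"WorkspaceId": "ws-0123456789"}])
```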

To make sure you are ready to get started with BYOL, reach out to your AWS account manager, solutions architect, or sales representative, or create a Technical Support case with Amazon WorkSpaces. Contact us to get started with BYOL here.


Learn more about WorkSpaces and other enterprise applications at the complimentary AWS Public Sector Summit in Washington, DC June 20-21, 2016.

The Future of Policing is in the Cloud

Imagine a world with connected citizens and community engagement empowered by innovative technology. This is a place with open dialogue between police and their citizens and between departments. This isn’t a utopian world; this type of connected community can be realized today. Law enforcement agencies throughout the country are increasing transparency, instilling trust, and making more informed policing decisions to better protect their citizens.

Technology can help get us to this collaborative future state through body-worn cameras, video redaction, e-citation, crime forecasting, digital evidence management, mobile forensic software, public records management, major event risk management, and e-discovery.

The White House Police Data Initiative (PDI) is accelerating progress around public safety data transparency and analysis, with a strong emphasis on sharing information and communicating. Greater transparency translates into stronger communities. Residents can learn, get answers to common questions, and share real stories in a central place. This open dialogue empowers citizens with real data and helps them understand the steps their police department is taking to keep the community safe.

Police departments involved in the PDI are working to connect their communities and improve public safety. The 53 police departments involved are committed to releasing 101 datasets never before seen by the public and are establishing new practices that will allow for knowledge sharing, community-sourced problem solving, and documentation of best practices for police departments nationwide.

Through the use of open data powered by the AWS cloud, police departments can securely share and collaborate, enabling the future of policing in departments throughout the country. Trusted names in law enforcement technology, like Motorola and Socrata, rely on AWS. We focus on securing your IT infrastructure, so you can focus on what matters most: protecting your citizens.

  1. Trust – We are committed to the public safety community and design products and services with you in mind. When the stakes are high, you shouldn't have to worry about your mission-critical apps going down. AWS is reliable, fault tolerant, and highly available.
  2. Security – AWS is an expert in physical and virtual security. We have more resources dedicated to manual and automated security measures than you can employ in-house, including encryption, audit trails, security-by-design, and network and security monitoring.
  3. Cost Savings – Spend your limited budget on what matters most. Our customers benefit from economies of scale, as we have dropped our prices 51 times. With AWS, you can experience 64.3% savings compared to your on-premises environment.

In addition to building trust, providing a secure infrastructure, and helping with cost savings, AWS complies with the FBI's Criminal Justice Information Services (CJIS) standard. We sign CJIS security agreements with our customers, including allowing or performing any required employee background checks. We also have an isolated AWS region, managed by U.S. persons only, designed to host sensitive data and regulated workloads in the cloud: AWS GovCloud (US).

The future of policing, powered by the AWS cloud, can be realized today. Come see what Socrata and AWS can do for your department. And check out our other partner solutions here.

Amazon Web Services Pledges Training and Certifications for Veterans

Amazon pledged to offer 10,000 service members, transitioning veterans, and military spouses over $7 million in Amazon Web Services (AWS) training. This pledge is part of Joining Forces, the First Lady and Dr. Jill Biden's initiative that works hand in hand with the public and private sectors to ensure that service members, veterans, and their families have the tools they need to succeed throughout their lives. By offering cloud computing training, we are excited to help veterans transition more easily into the civilian workforce and develop the skills necessary to pursue jobs in the high-demand cloud computing space.

With the dramatically increasing demand for employees skilled in cloud computing, AWS is providing an academic gateway for the next generation of IT and cloud professionals from the military. These training pledges include:

  • Free membership in AWS Educate, Amazon's global initiative to provide students and educators with the resources needed to greatly accelerate cloud-related learning and to help power the entrepreneurs, workforce, and researchers of tomorrow. This includes $50 in credits for AWS cloud services, training courses like AWS Tech Essentials, a wide library of cloud content, and access to our collaboration portal.
  • Free access to over 90 labs on AWS services and solutions, as well as certification prep labs.
  • Eligibility for AWS Certification exam reimbursement from the Department of Veterans Affairs under the GI Bill's education provision.

Amazon is also committed to training 25 wounded warriors at AWS Boot Camps for functional roles in cloud computing and at commercial companies operating in the tech space, and to hiring 25,000 veterans and military spouses over the next five years. Learn more in the Amazon blog here.

At Amazon, we are deeply committed to military service members and their spouses. We thank all active and retired military members for their service and look forward to working with transitioning veterans.

Learn more about AWS Educate here.


Remember the Alamo: A Story Bigger than Texas

Remember the Alamo? About two million annual visitors do, making the historic site the most visited tourism destination in Texas. Most know the Alamo for the 90-minute battle in 1836 that led to Texas’ independence from Mexico, but the 300-year-old Alamo has an even deeper history to tell.

As the flagship tourism site for the state of Texas, the Alamo needed a website that’s worthy of it. The General Land Office (GLO), led by Texas Land Commissioner George P. Bush, set out to create a website that raised the bar far beyond the typical government website and pushed the boundaries of tourism web design. The resulting website features full-screen video, megamenu navigation, responsive design for mobile, and other rich, interactive content to entice and excite prospective visitors of all ages.

However, the site’s large bandwidth needs demanded a solution that could keep up with the 250,000+ unique monthly visitors. To achieve seamless visitor experiences, GLO partnered with AWS to develop and host the new site.

The Alamo used AWS as a flexible, highly scalable, and low-cost way to deliver its website and web applications, storing content in Amazon Simple Storage Service (Amazon S3) and delivering it through Amazon CloudFront.

“Commissioner Bush set the goal of GLO becoming Texas’ technology and innovation leader. This site plus its hosting on Amazon achieves that vision. Just as the engine of the car must work in the most visually appealing vehicle, the back end needs to perform on a website. AWS allows us to deliver a unique user experience. Now, videos do not need to buffer and images are downloaded quickly. This is critical for user experience and helps us get people excited to come visit the Alamo, which was the main goal of this project,” said Bryan Preston, Communications Director at Texas GLO.

Moving at an incredible pace and demonstrating leading-edge design, Bryan and his team needed an easy way to distribute content to end users with low latency and high data transfer speeds. Amazon CloudFront was used to deliver the entire website, including dynamic, static, and streaming content, using a global network of edge locations.

Since not only Texans visit the site (in fact, analytics show people from around the world are interested in this historic place), requests for content are automatically routed to the nearest edge location, so content is delivered with the best possible performance.
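
For teams that want to replicate this pattern, the following hedged boto3 sketch uploads site content to an S3 origin bucket and creates a minimal CloudFront distribution in front of it. All names and identifiers are placeholders, not the GLO's actual configuration:

```python
import boto3

# Hedged sketch of the S3 + CloudFront pattern described above: serve assets
# from an S3 origin bucket with a CloudFront distribution in front, so
# requests are answered from the nearest edge location. All names and IDs
# are placeholders, not the GLO's actual configuration.
s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# Upload a page to the bucket that serves as the site origin.
s3.upload_file(
    Filename="index.html",
    Bucket="example-site-origin",  # placeholder bucket
    Key="index.html",
    ExtraArgs={"ContentType": "text/html"},
)

# Create a minimal distribution with the bucket as its origin.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "example-site-2016",
        "Comment": "Static site served from global edge locations",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "s3-origin",
                    "DomainName": "example-site-origin.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "allow-all",
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "MinTTL": 0,
        },
    }
)
```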

The website gives a glimpse into a much larger narrative and allows the Alamo to continue to capture the imaginations of Americans after more than two centuries.

Check out their newly revamped website and learn more about hosting your website on AWS here.

We want to know how you would use the cloud to make it easier to deliver services to your citizens. Apply now for the City on a Cloud Innovation Challenge. AWS and a panel of worldwide experts will award a total of $250,000 in AWS promotional credits to eight grand prize winners at the AWS Public Sector Summit.

If every government asset is a sensor, what does that mean for management?

A guest post by Frank DiGiammarino, Director, AWS State and Local Government

I have the opportunity to see local and regional governments around the world engage in innovative and inspiring programs to better serve their citizens. City leaders get creative when faced with pressure to innovate within fixed or shrinking budgets. To save costs, they must look at what they already have and collaborate across agencies to see how they can adapt their services for the new era of technology—the Internet of Things (IoT).

The growth of IoT

With the growth of IoT, more and more objects can be classified as sensors. The police officer with the body camera, the trashcan monitoring waste, the ambulance rushing to the hospital: all of these "things" can, in fact, be "sensors" that collect vital data.

This should be a welcome advancement as we look to have more connectivity to our citizens and their needs, with little or no extra budget. To take advantage of IoT, city leaders can take small steps into this connected world by starting with what they already have. They can also share across agencies and access more data, eventually making the whole government more efficient. When data is made available to the public, startups and established businesses can create modern services and programs for the constituents of those cities.

For example, Philips uses IoT to deliver healthcare in the home. They plan to make it easier for patients to self-monitor their health using the AWS Cloud and IoT technologies. “We’ve been working with AWS for a lot of reasons because you can imagine the amount of information you generate from IoT devices,” said Jeroen Tas, CEO, Healthcare Informatics Solutions and Services. “We’ve been working with Amazon Web Services on our IoT platform and of course, we bring in the medical grade view of that. We need that core platform and the core capability of Amazon Web Services to really tie this together into a single system.”

Delivering on this vision requires the ability to collect, store, analyze, and cross-reference the data Philips receives from these devices at large scale and in real time. By storing and analyzing the data, Philips can take action and deliver results for its patients. AWS IoT provides the services to easily and securely connect sensors to the cloud. Similarly, city leaders can take the sensors already in place throughout their cities and act on the data to positively impact their citizens.
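
As a small illustration of the same pattern at city scale, the boto3 sketch below publishes a sensor reading to an AWS IoT topic over the HTTPS data plane (real devices typically use MQTT with device certificates). The topic and payload fields are hypothetical:

```python
import json
import boto3

# Hedged sketch: a city sensor (or a gateway acting on its behalf) publishing
# a reading to an AWS IoT topic over the HTTPS data plane. Real devices
# typically use MQTT with device certificates; topic and fields are placeholders.
iot = boto3.client("iot-data")

iot.publish(
    topic="city/trashcans/42/fill-level",  # placeholder topic
    qos=1,
    payload=json.dumps(
        {
            "sensorId": "trashcan-42",
            "fillPercent": 87,
            "timestamp": "2016-05-01T12:00:00Z",
        }
    ),
)
```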

So the sensor is there, now what?

  1. Shift the mindset – First, authorities should be enabled and trained in how best to analyze and act on this data. Governments are often managing based on lagging data, so by the time they get it, the world has changed. Instead, they should be managing the enterprise with leading data.
  2. Understand the data – Innovation occurs in the white space between silos. The cloud can be used as a low-cost, agile option to get the data out of silos and explore the challenges, opportunities, and needs of the people. By understanding the data, governments can build new tools and services for the people and for their agencies. There is a great opportunity for sensor data to help build healthier, greener, safer, and better-educated communities.
  3. Drive new thinking – Build citizen-obsessed solutions that focus on how government can make people's lives better and easier. Now that governments have sensor data and open data, they can start to take advantage of technology to deliver what people need and want: reduced crime, fixed potholes, and lower bills. Other IoT examples include real-time updates on traffic, road closures, construction, parking, energy, utilities, public safety, citizen connection, predictive maintenance, emergency management, air quality, and waste management.

Sensors, data, and the interpretation of that data are game changers for the outcomes that they can produce. If the sensors are in place, the next step is to aggregate the data and make adjustments for the city. This is big because it affects people where they work, live, and raise their kids.

We want to recognize how you are innovating on behalf of citizens today through our City on a Cloud Innovation Challenge. Only two weeks remain (the deadline is May 13, 2016) to apply. We are giving away a total of $250,000 in AWS promotional credits to eight winners. Share your story today!


Read more from Frank in his paper titled, “Can Government Work like OpenTable?” here.