Amazon EMR enables you to quickly and easily provision as much capacity as you need and add or remove capacity at any time. This is very useful if you have variable or unpredictable processing requirements. For example, if the bulk of your processing occurs at night, you might need 100 instances during the day and 500 instances at night. Alternatively, you might need a significant amount of capacity for a short period of time. With Amazon EMR you can quickly provision hundreds or thousands of instances, and shut them down when your job is complete (to avoid paying for idle capacity).

There are two main options for adding or removing capacity:

Deploy multiple clusters: If you need more capacity, you can easily launch a new cluster and terminate it when you no longer need it. There is no limit to how many clusters you can have. You may want to use multiple clusters if you have multiple users or applications. For example, you can store your input data in Amazon S3 and launch one cluster for each application that needs to process the data. One cluster might be optimized for CPU, a second cluster might be optimized for storage, etc.

Resize a running cluster: With Amazon EMR it is easy to resize a running cluster. You may want to resize a cluster if you are storing your data in HDFS and you want to temporarily add more processing power. For example, some customers add hundreds of instances to their clusters when their batch processing occurs, and remove the extra instances when processing completes.
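
As a rough illustration, a resize can be scripted with the AWS SDK for Python (boto3); in this minimal sketch the cluster ID and target instance count are placeholders:

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    # Find the core instance group of the running cluster.
    groups = emr.list_instance_groups(ClusterId="j-XXXXXXXXXXXXX")["InstanceGroups"]
    core = next(g for g in groups if g["InstanceGroupType"] == "CORE")

    # Grow the cluster to 500 core instances for the nightly batch window.
    emr.modify_instance_groups(
        ClusterId="j-XXXXXXXXXXXXX",
        InstanceGroups=[{"InstanceGroupId": core["Id"], "InstanceCount": 500}],
    )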

Amazon EMR is designed to reduce the cost of processing large amounts of data. Some of the features that make it low cost include low hourly pricing, Amazon EC2 Spot integration, Amazon EC2 Reserved Instance integration, elasticity, and Amazon S3 integration.

Low Hourly Pricing: Amazon EMR pricing is per instance-hour and starts at $0.015 per instance-hour for a small instance ($131.40 per year). See the pricing section for more detail.

Amazon EC2 Spot Integration: Amazon EC2 Spot Instances allow you to name your own price for Amazon EC2 capacity. You simply specify the maximum hourly price that you are willing to pay to run a particular instance type. As long as your bid price exceeds the Spot market price, you keep the instances and typically pay a fraction of the On-Demand price. The Spot price fluctuates based on supply and demand for instances, but you will never pay more than the maximum price you specified. Amazon EMR makes it easy to use Spot Instances so you can save both time and money. Amazon EMR clusters include 'core nodes' that run HDFS and 'task nodes' that do not; task nodes are ideal for Spot because if the Spot price increases and you lose those instances, you will not lose data stored in HDFS. (Learn more about core and task nodes.)
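
A minimal boto3 sketch of adding a Spot task instance group; the cluster ID, instance type, count, and bid price below are placeholders:

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    # Add task nodes on the Spot market; task nodes do not run HDFS,
    # so reclaimed Spot capacity cannot cause HDFS data loss.
    emr.add_instance_groups(
        JobFlowId="j-XXXXXXXXXXXXX",
        InstanceGroups=[{
            "Name": "spot-task-nodes",
            "InstanceRole": "TASK",
            "InstanceType": "m3.xlarge",
            "InstanceCount": 100,
            "Market": "SPOT",
            "BidPrice": "0.10",  # maximum hourly price in USD
        }],
    )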

Amazon EC2 Reserved Instance Integration: Amazon EC2 Reserved Instances enable you to maintain the benefits of elastic computing while lowering costs and reserving capacity. With Reserved Instances you pay a low, one-time fee and in turn receive a significant discount on the hourly charge for that instance. Amazon EMR makes it easy to utilize Reserved Instances so you can save up to 65% off the On-Demand price.

Elasticity: Because Amazon EMR makes it easy to add and remove capacity, you don’t need to provision excess capacity. For example, you may not know how much data your cluster(s) will be handling in 6 months, or you may have spiky processing needs. With Amazon EMR you don't need to guess your future requirements or provision for peak demand because you can easily add/remove capacity at any time.

Amazon S3 Integration: The EMR File System (EMRFS) allows EMR clusters to efficiently and securely use Amazon S3 as an object store for Hadoop. You can store your data in Amazon S3 and use multiple Amazon EMR clusters to process the same data set. Each cluster can be optimized for a particular workload, which can be more efficient than a single cluster serving multiple workloads with different requirements. For example, you might have one cluster that is optimized for I/O and another that is optimized for CPU, each processing the same data set in Amazon S3. In addition, by storing your input and output data in Amazon S3, you can shut down clusters when they are no longer needed. 

EMRFS has strong performance reading from and writing to Amazon S3, supports encryption for data at rest with S3 server-side encryption, and offers an optional consistent view which checks for list and read-after-write consistency for objects tracked in its metadata. Also, EMR clusters can use both EMRFS and HDFS, so you don’t have to choose between on-cluster storage and Amazon S3.
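
As an illustration, consistent view can be enabled at launch through the emrfs-site configuration classification; this boto3 sketch assumes an EMR 4.x release label, the default EMR roles, and placeholder names throughout:

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    emr.run_job_flow(
        Name="emrfs-consistent-view-demo",
        ReleaseLabel="emr-4.7.0",
        Applications=[{"Name": "Hadoop"}],
        Instances={
            "MasterInstanceType": "m3.xlarge",
            "SlaveInstanceType": "m3.xlarge",
            "InstanceCount": 3,
        },
        # Turn on EMRFS consistent view.
        Configurations=[{
            "Classification": "emrfs-site",
            "Properties": {"fs.s3.consistent": "true"},
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
        LogUri="s3://my-bucket/emr-logs/",
    )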

With Amazon EMR, you can leverage multiple data stores, including Amazon S3, the Hadoop Distributed File System (HDFS), and Amazon DynamoDB.

Amazon S3: Amazon S3 is Amazon Web Services’ highly durable, scalable, secure, fast, and inexpensive storage service. With the EMR File System (EMRFS), Amazon EMR can efficiently and securely use Amazon S3 as an object store for Hadoop. Amazon EMR has made numerous improvements to Hadoop, allowing you to seamlessly process large amounts of data stored in Amazon S3. EMRFS also offers an optional consistent view, which checks for list and read-after-write consistency for objects in Amazon S3, and supports S3 server-side encryption for data at rest.

When you launch your cluster, Amazon EMR streams the data from Amazon S3 to each instance in your cluster and begins processing it immediately. One advantage of storing your data in Amazon S3 and processing it with Amazon EMR is that you can use multiple clusters to process the same data. For example, you might have a Hive development cluster that is optimized for memory and a Pig production cluster that is optimized for CPU, both using the same input data set.

Hadoop Distributed File System (HDFS): HDFS is the Hadoop file system. In Amazon EMR, HDFS uses local ephemeral storage. Depending on the instance type, this could be spinning disks or solid state drives. Every instance in your cluster has local ephemeral storage, but you decide which instances run HDFS. Amazon EMR refers to instances running HDFS as 'core nodes' and instances not running HDFS as 'task nodes'.

Amazon DynamoDB: Amazon DynamoDB is a fast, fully managed NoSQL database service. Amazon EMR has direct integration with Amazon DynamoDB so you can quickly and efficiently process data stored in Amazon DynamoDB and transfer data between Amazon DynamoDB, Amazon S3, and HDFS in Amazon EMR.
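
For example, a DynamoDB table can be exposed to Hive through EMR's DynamoDB storage handler. The sketch below stages a HiveQL script in S3 with boto3 and runs it as a step; the bucket, table names, cluster ID, and column mapping are all placeholders:

    import boto3

    # HiveQL mapping a DynamoDB table into Hive via EMR's storage handler.
    DDL = """
    CREATE EXTERNAL TABLE orders (order_id string, total double)
    STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
    TBLPROPERTIES (
      "dynamodb.table.name" = "Orders",
      "dynamodb.column.mapping" = "order_id:OrderId,total:Total"
    );
    """

    boto3.client("s3").put_object(
        Bucket="my-bucket", Key="scripts/ddb_table.q", Body=DDL.encode()
    )
    boto3.client("emr").add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXXX",
        Steps=[{
            "Name": "map-dynamodb-table",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["hive-script", "--run-hive-script",
                         "--args", "-f", "s3://my-bucket/scripts/ddb_table.q"],
            },
        }],
    )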

Other AWS Data Stores: Amazon EMR customers also use Amazon Relational Database Service (a web service that makes it easy to set up, operate, and scale a relational database in the cloud), Amazon Glacier (an extremely low-cost storage service that provides secure and durable storage for data archiving and backup), and Amazon Redshift (a fast, fully managed, petabyte-scale data warehouse service). AWS Data Pipeline is a web service that helps customers reliably process and move data between different AWS compute and storage services (including Amazon EMR) as well as on-premises data sources at specified intervals.

EMR supports powerful and proven Hadoop tools such as Hive, Pig, HBase, and Impala.

Hive is an open source data warehouse and analytics package that runs on top of Hadoop. Queries are written in HiveQL, a SQL-based language that allows users to structure, summarize, and query data. HiveQL goes beyond standard SQL, adding first-class support for map/reduce functions and complex extensible user-defined data types like JSON and Thrift. This capability allows processing of complex and unstructured data sources such as text documents and log files. Hive also allows user extensions via user-defined functions written in Java. Amazon EMR has made numerous improvements to Hive, including direct integration with Amazon DynamoDB and Amazon S3. For example, with Amazon EMR you can load table partitions automatically from Amazon S3, write data to tables in Amazon S3 without using temporary files, and access resources in Amazon S3 such as scripts for custom map/reduce operations and additional libraries. Learn more about Hive and EMR.
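
For instance, EMR's Hive adds an ALTER TABLE ... RECOVER PARTITIONS statement to pick up partitions already laid out in S3. A boto3 sketch of running it as a step, assuming the hive-script runner passes -e through to the Hive CLI; the table name and cluster ID are placeholders:

    import boto3

    # Recover partitions already present under the table's S3 location.
    boto3.client("emr").add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXXX",
        Steps=[{
            "Name": "recover-partitions",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["hive-script", "--run-hive-script", "--args",
                         "-e", "ALTER TABLE logs RECOVER PARTITIONS"],
            },
        }],
    )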

Pig is an open source analytics package that runs on top of Hadoop. Queries are written in Pig Latin, a SQL-like language that allows users to structure, summarize, and query data. In addition to SQL-like operations, Pig Latin adds first-class support for map/reduce functions and complex, extensible user-defined data types. This capability allows processing of complex and unstructured data sources such as text documents and log files. Pig also allows user extensions via user-defined functions written in Java. Amazon EMR has made numerous improvements to Pig, including the ability to use multiple file systems (normally Pig can only access one remote file system), the ability to load custom JARs and scripts from Amazon S3 (e.g. “REGISTER s3://my-bucket/piggybank.jar”), and additional functionality for String and DateTime processing. Learn more about Pig and EMR.
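
A Pig script stored in Amazon S3 can be submitted as a step in much the same way; in this boto3 sketch the script path and cluster ID are placeholders, and the script itself could REGISTER a JAR directly from S3 as shown above:

    import boto3

    boto3.client("emr").add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXXX",
        Steps=[{
            "Name": "pig-job",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["pig-script", "--run-pig-script", "--args",
                         "-f", "s3://my-bucket/scripts/report.pig"],
            },
        }],
    )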

HBase is an open source, non-relational, distributed database modeled after Google's BigTable. It was developed as part of the Apache Software Foundation's Hadoop project and runs on top of the Hadoop Distributed File System (HDFS) to provide BigTable-like capabilities for Hadoop. HBase provides a fault-tolerant, efficient way of storing large quantities of sparse data using column-based compression and storage. In addition, HBase provides fast lookups because recently accessed data is cached in memory. HBase is optimized for sequential write operations, and it is highly efficient for batch inserts, updates, and deletes. HBase works seamlessly with Hadoop, sharing its file system and serving as a direct input and output to Hadoop jobs. HBase also integrates with Apache Hive, enabling SQL-like queries over HBase tables, joins with Hive-based tables, and support for Java Database Connectivity (JDBC). With Amazon EMR you can back up HBase to Amazon S3 (full or incremental, manual or automated) and restore from a previously created backup. Learn more about HBase and EMR.

Impala is an open source tool in the Hadoop ecosystem for interactive, ad hoc querying using SQL syntax. Instead of using MapReduce, it leverages a massively parallel processing (MPP) engine similar to those found in traditional relational database management systems (RDBMS). With this architecture, you can query data in HDFS or HBase tables very quickly and leverage Hadoop's ability to process diverse data types and provide schema at runtime. This makes Impala well suited to interactive, low-latency analytics. Impala also supports user-defined functions in Java and C++, and can connect to BI tools through ODBC and JDBC drivers. Impala uses the Hive metastore to hold information about the input data, including partition names and data types. Learn more about Impala and EMR.

Other: Amazon EMR also supports a variety of other popular applications and tools, such as R, Mahout (machine learning), Ganglia (monitoring), Spark (in-memory distributed processing), SparkSQL (data warehouse on Spark), Accumulo (secure NoSQL database), Sqoop (relational database connector), HCatalog (table and storage management), and more.

Use the MapR Distribution: MapR delivers on the promise of Hadoop with a proven, enterprise-grade platform that supports a broad set of mission-critical and real-time production uses. MapR brings unprecedented dependability, ease of use, and world-record speed to Hadoop, NoSQL, database, and streaming applications in one unified Big Data platform. Learn more about using MapR on EMR.

Tune Your Cluster: You choose what types of EC2 instances to provision in your cluster (standard, high memory, high CPU, high I/O, etc.) based on your application’s requirements. You have root access to every instance and you can fully customize your cluster to suit your requirements. Learn more about supported EC2 Instance Types.

Debug Your Applications: When you enable debugging on a cluster, Amazon EMR archives the log files to Amazon S3 and then indexes those files. You can then use a graphical interface to browse the logs in an intuitive way. Learn more about debugging EMR jobs.
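
Because the archived logs are ordinary S3 objects, you can also inspect them directly; a minimal boto3 sketch, with the bucket, log prefix, and cluster ID as placeholders:

    import boto3

    # Step logs land under <LogUri>/<cluster-id>/steps/.
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(
        Bucket="my-bucket",
        Prefix="emr-logs/j-XXXXXXXXXXXXX/steps/",
    )
    for obj in resp.get("Contents", []):
        print(obj["Key"])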

Monitor Your Cluster: You can use Amazon CloudWatch to monitor 23 custom Amazon EMR metrics, such as the average number of running map and reduce tasks. You can also set alarms on these metrics. Learn more about monitoring EMR clusters.
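
For example, EMR metrics are published under the AWS/ElasticMapReduce namespace with a JobFlowId dimension; a boto3 sketch that reads one metric for a placeholder cluster over the last hour:

    from datetime import datetime, timedelta

    import boto3

    cw = boto3.client("cloudwatch")
    stats = cw.get_metric_statistics(
        Namespace="AWS/ElasticMapReduce",
        MetricName="RunningMapTasks",
        Dimensions=[{"Name": "JobFlowId", "Value": "j-XXXXXXXXXXXXX"}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    print(stats["Datapoints"])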

Schedule Recurring Workflows: You can use AWS Data Pipeline to schedule recurring workflows involving Amazon EMR. AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services as well as on-premises data sources at specified intervals. Learn more about EMR and Data Pipeline.

Cascading: Cascading is an open-source Java library that provides a query API, a query planner, and a job scheduler for creating and running Hadoop MapReduce applications. Applications developed with Cascading are compiled and packaged into standard Hadoop-compatible JAR files similar to other native Hadoop applications. Learn more about Cascading and EMR.

Control Network Access to Your Cluster: You can launch your cluster in an Amazon Virtual Private Cloud (VPC), a logically isolated section of the AWS cloud. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. Learn more about EMR and Amazon VPC.
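
In the API this amounts to pointing the cluster at a subnet; a minimal boto3 sketch with a placeholder subnet ID and the default EMR roles:

    import boto3

    boto3.client("emr").run_job_flow(
        Name="vpc-cluster",
        ReleaseLabel="emr-4.7.0",
        Applications=[{"Name": "Hadoop"}],
        Instances={
            "MasterInstanceType": "m3.xlarge",
            "SlaveInstanceType": "m3.xlarge",
            "InstanceCount": 3,
            "Ec2SubnetId": "subnet-0abc1234",  # launch into this VPC subnet
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )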

Manage Users and Permissions: You can use AWS Identity & Access Management (IAM) tools such as IAM Users and Roles to control access and permissions. For example, you could give certain users read but not write access to your clusters. Learn more about controlling access to your cluster.
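
For read-only access, one option is attaching the AWS managed read-only EMR policy to a user; a short boto3 sketch with a placeholder user name:

    import boto3

    boto3.client("iam").attach_user_policy(
        UserName="analyst",
        PolicyArn="arn:aws:iam::aws:policy/AmazonElasticMapReduceReadOnlyAccess",
    )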

Install Additional Software: You can use bootstrap actions to install additional software and to change the configuration of applications on the cluster. Bootstrap actions are scripts that are run on the cluster nodes when Amazon EMR launches the cluster. They run before Hadoop starts and before the node begins processing data. You can write custom bootstrap actions, or use predefined bootstrap actions provided by Amazon EMR. Learn more about EMR Bootstrap Actions.
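
A launch-time sketch with boto3, where the bootstrap script path in S3 is a placeholder; the script runs on every node before Hadoop starts:

    import boto3

    boto3.client("emr").run_job_flow(
        Name="bootstrap-demo",
        ReleaseLabel="emr-4.7.0",
        Applications=[{"Name": "Hadoop"}],
        Instances={
            "MasterInstanceType": "m3.xlarge",
            "SlaveInstanceType": "m3.xlarge",
            "InstanceCount": 3,
        },
        BootstrapActions=[{
            "Name": "install-extra-packages",
            "ScriptBootstrapAction": {
                "Path": "s3://my-bucket/bootstrap/install.sh",
                "Args": [],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )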

Efficiently Copy Data: You can quickly move large amounts of data from Amazon S3 to HDFS, from HDFS to Amazon S3, and between Amazon S3 buckets using Amazon EMR’s S3DistCp, an extension of the open source tool DistCp that uses MapReduce to move large amounts of data efficiently. Learn more about S3DistCp.
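
On EMR 4.x and later releases, S3DistCp can be invoked as the s3-dist-cp command through command-runner; a boto3 sketch with placeholder paths and cluster ID:

    import boto3

    # Copy input data from Amazon S3 into HDFS ahead of a batch run.
    boto3.client("emr").add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXXX",
        Steps=[{
            "Name": "s3distcp-load",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["s3-dist-cp",
                         "--src", "s3://my-bucket/input/",
                         "--dest", "hdfs:///input/"],
            },
        }],
    )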

Hadoop Streaming: Hadoop Streaming is a utility that comes with Hadoop that enables you to develop MapReduce executables in languages other than Java. Streaming is implemented in the form of a JAR file. Learn more about Hadoop Streaming with EMR.
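
A streaming step sketch with boto3, where the mapper and reducer are scripts in S3 and all paths are placeholders:

    import boto3

    boto3.client("emr").add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXXX",
        Steps=[{
            "Name": "wordcount-streaming",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["hadoop-streaming",
                         "-files", "s3://my-bucket/mapper.py,s3://my-bucket/reducer.py",
                         "-mapper", "mapper.py",
                         "-reducer", "reducer.py",
                         "-input", "s3://my-bucket/input/",
                         "-output", "s3://my-bucket/output/"],
            },
        }],
    )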

Custom JAR: Write a Java program, compile it against the version of Hadoop you want to use, and upload it to Amazon S3. You can then submit Hadoop jobs to the cluster using the Hadoop JobClient interface. Learn more about Custom JAR processing with EMR.
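
Submitting such a JAR as a step might look like this in boto3; the JAR path, main class, and arguments are placeholders:

    import boto3

    boto3.client("emr").add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXXX",
        Steps=[{
            "Name": "custom-jar-job",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "s3://my-bucket/jars/my-app.jar",
                "MainClass": "com.example.MyJob",
                "Args": ["s3://my-bucket/input/", "s3://my-bucket/output/"],
            },
        }],
    )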

EMR can be used with a wide variety of third party software tools, such as:

BI/Visualization, Hadoop Distribution, Graphical IDE, Data Transfer, Analytics Platform, Business Intelligence, Monitoring, Data Transformation, and Performance Tuning.