Videos

A Technical Introduction to Amazon EMR (50:44)
Amazon EMR Deep Dive & Best Practices (49:12)

Stay up to date with AWS webinars.

How to use Amazon EMR

  1. Develop your data processing application. You can use Java, Hive (a SQL-like language), Pig (a data processing language), Cascading, Ruby, Perl, Python, R, PHP, C++, or Node.js. Amazon EMR provides code samples and tutorials to get you up and running quickly.
  2. Upload your application and data to Amazon S3. If you have a large amount of data to upload, consider using AWS Import/Export Snowball to move it on physical storage devices, or AWS Direct Connect to establish a dedicated network connection from your data center to AWS. If you prefer, you can also write your data directly to a running cluster.
  3. Configure and launch your cluster. Using the AWS Management Console, the AWS CLI, SDKs, or APIs, specify the number of Amazon EC2 instances to provision in your cluster, the types of instances to use (standard, high memory, high CPU, high I/O, etc.), the applications to install (Hive, Pig, HBase, etc.), and the location of your application and data. You can use Bootstrap Actions to install additional software or change default settings. A minimal SDK sketch of this step appears after the list.
  4. Monitor the cluster (Optional). You can monitor the cluster's health and progress using the Management Console, the AWS CLI, SDKs, or APIs. EMR integrates with Amazon CloudWatch for monitoring and alarming, and supports popular monitoring tools like Ganglia. You can add or remove capacity at any time to handle more or less data. For troubleshooting, you can use the console's simple debugging GUI.
  5. Retrieve the output. Retrieve the output from Amazon S3 or from HDFS on the cluster. Visualize the data with tools like Tableau and MicroStrategy. Amazon EMR will automatically terminate the cluster when processing is complete. Alternatively, you can leave the cluster running and give it more work to do.
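
The same workflow can be scripted with one of the SDKs. Below is a minimal sketch using boto3, the AWS SDK for Python; the bucket name, file names, instance types, release label, region, and IAM role names are placeholder assumptions to replace with your own values.

    import boto3

    REGION = "us-east-1"        # assumption: adjust to your region
    BUCKET = "my-emr-bucket"    # assumption: an S3 bucket you own

    s3 = boto3.client("s3", region_name=REGION)
    emr = boto3.client("emr", region_name=REGION)

    # Step 2: upload the application and input data to Amazon S3.
    s3.upload_file("wordcount.py", BUCKET, "scripts/wordcount.py")
    s3.upload_file("input.txt", BUCKET, "input/input.txt")

    # Step 3: configure and launch the cluster, installing Spark and Hive.
    # EMR_EC2_DefaultRole and EMR_DefaultRole are the default EMR roles and
    # are assumed to already exist in your account.
    response = emr.run_job_flow(
        Name="sample-cluster",
        ReleaseLabel="emr-5.36.0",
        Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
        Instances={
            "InstanceGroups": [
                {"Name": "Master", "InstanceRole": "MASTER",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "Core", "InstanceRole": "CORE",
                 "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        Steps=[{
            "Name": "Word count",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit",
                         f"s3://{BUCKET}/scripts/wordcount.py",
                         f"s3://{BUCKET}/input/",
                         f"s3://{BUCKET}/output/"],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )

    # Step 4: check the cluster's state; CloudWatch can alarm on the same data.
    cluster_id = response["JobFlowId"]
    state = emr.describe_cluster(ClusterId=cluster_id)["Cluster"]["Status"]["State"]
    print(cluster_id, state)

Because KeepJobFlowAliveWhenNoSteps is False, the cluster terminates on its own after the step completes (step 5), and the results can then be retrieved from the output prefix in S3.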

Are you ready to launch your first cluster?

Click here to launch a cluster using the Amazon EMR Management Console. On the Create Cluster page, go to Advanced cluster configuration, and click on the gray "Configure Sample Application" button at the top right if you want to run a sample application with sample data.

Tutorials

Spark

Learn how to set up Apache Kafka on EC2, use Spark Streaming on EMR to process data arriving in Apache Kafka topics, and query the streaming data using Spark SQL on EMR.
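
As a rough sketch of the processing and query pieces, the snippet below reads a Kafka topic with PySpark and queries it with Spark SQL. It uses the Structured Streaming API rather than the DStream-based Spark Streaming covered in the tutorial; the broker address and topic name are placeholders, and the spark-sql-kafka connector is assumed to be available on the cluster.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("kafka-demo").getOrCreate()

    # Read the Kafka topic as a streaming DataFrame (the broker and topic
    # names are placeholders for your own setup).
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")
              .option("subscribe", "clickstream")
              .load()
              .select(col("value").cast("string").alias("event")))

    # Register the stream as a temp view and query it with Spark SQL.
    events.createOrReplaceTempView("events")
    counts = spark.sql("SELECT event, COUNT(*) AS n FROM events GROUP BY event")

    # Write running counts to the console; aggregations need complete mode.
    query = (counts.writeStream
             .outputMode("complete")
             .format("console")
             .start())
    query.awaitTermination()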

Learn how Intent Media used Spark and Amazon EMR for their modeling workflows.

HBase

Learn how to connect to Phoenix using JDBC, create a view over an existing HBase table, and create a secondary index for increased read performance.

Learn how to launch an EMR cluster with HBase and restore a table from a snapshot in Amazon S3.
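
The key step in that tutorial is pointing HBase's storage at S3 when the cluster is launched. Here is a minimal sketch of the relevant configuration classifications, which can be passed as the Configurations argument of the run_job_flow call shown earlier; the bucket path is a placeholder, and HBase must also be listed in Applications.

    # Configuration classifications that tell EMR to keep HBase data in S3.
    # The hbase.rootdir value is a placeholder path in your own bucket.
    hbase_on_s3 = [
        {
            "Classification": "hbase",
            "Properties": {"hbase.emr.storageMode": "s3"},
        },
        {
            "Classification": "hbase-site",
            "Properties": {"hbase.rootdir": "s3://my-emr-bucket/hbase/"},
        },
    ]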

Presto

Learn how to set up a Presto cluster and use Airpal to process data stored in S3.
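
If you would rather query from a script than from Airpal, a minimal sketch along these lines reaches the Presto coordinator on the master node; the PyHive package, hostname, and table name are assumptions, and 8889 is the port Presto normally uses on EMR.

    # Minimal sketch: query Presto on the EMR master node from Python.
    # Assumes PyHive is installed (pip install "pyhive[presto]") and that the
    # hostname and table below are replaced with your own values.
    from pyhive import presto

    conn = presto.connect(
        host="ec2-xx-xx-xx-xx.compute-1.amazonaws.com",  # EMR master node
        port=8889,
        catalog="hive",
        schema="default",
    )
    cursor = conn.cursor()

    # The table is assumed to be a Hive table backed by data in S3.
    cursor.execute("SELECT os, COUNT(*) FROM sample_logs GROUP BY os LIMIT 10")
    for row in cursor.fetchall():
        print(row)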

Hive

Learn how to connect to a Hive job flow running on Amazon Elastic MapReduce to create a secure and extensible platform for reporting and analytics.
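
One way to make that connection from Python is through HiveServer2 on the master node, which listens on port 10000 by default. The sketch below assumes the PyHive package and uses a placeholder hostname.

    # Minimal sketch: connect to HiveServer2 on the EMR master node.
    # Assumes PyHive is installed (pip install "pyhive[hive]"); the hostname
    # is a placeholder, and "hadoop" is the default user on EMR.
    from pyhive import hive

    conn = hive.connect(
        host="ec2-xx-xx-xx-xx.compute-1.amazonaws.com",
        port=10000,
        username="hadoop",
        database="default",
    )
    cursor = conn.cursor()
    cursor.execute("SHOW TABLES")
    print(cursor.fetchall())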

Flink

This tutorial outlines a reference architecture for a consistent, scalable, and reliable stream processing pipeline based on Apache Flink, using Amazon EMR, Amazon Kinesis, and Amazon Elasticsearch Service.

Learn at your own pace with other tutorials.

Training and help

Short-term engagements

Do you need help building a proof of concept or tuning your EMR applications? AWS has a global support team that specializes in EMR. Please contact us if you are interested in learning more about short-term (2-6 week) paid support engagements.

AWS Big Data training

The Big Data on AWS course is designed to give you hands-on experience using Amazon Web Services for big data workloads. AWS will show you how to run Amazon EMR jobs to process data using the broad ecosystem of Hadoop tools like Pig and Hive. AWS will also teach you how to create big data environments in the cloud by working with Amazon DynamoDB and Amazon Redshift, understand the benefits of Amazon Kinesis, and apply best practices to design big data environments for analysis, security, and cost-effectiveness. To learn more about the Big Data course, click here.

Additional training

Scale Unlimited offers customized on-site training for companies that need to quickly learn how to use EMR and other big data technologies. To find out more, click here.

Discover more Amazon EMR resources

Visit the resources page
Ready to build?
Get started with Amazon EMR
Have more questions?
Contact us