To get started with Amazon Redshift for free:
- Step through the Getting Started tutorial to create and provision your first cluster, then load and query sample data in minutes.
- For a conceptual introduction, see Amazon Redshift System Overview.
- Gain hands-on practice with Amazon Redshift through this free self-paced lab and this series of hands-on labs.
- When you're ready to start loading and analyzing your own data, see Designing Tables, Loading Data, and Designing Queries in the Amazon Redshift Database Developer Guide.
- Analyze your data with any SQL client using industry-standard ODBC or JDBC connections. See Connecting to a Cluster in the Amazon Redshift Management Guide for details.
- Check out business intelligence (BI) and data integration (ETL) vendors that have certified Amazon Redshift for use with their tools.
- Get trained by attending an instructor-led class.
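Once a cluster is running, the ODBC/JDBC connection details mentioned above follow a predictable shape. The sketch below shows how those connection strings are typically assembled; the endpoint, database name, and credentials are placeholders, assumed for illustration — substitute the values shown on your cluster's detail page in the Amazon Redshift Console.

```python
# Sketch: assembling a JDBC URL and an ODBC connection string for an
# Amazon Redshift cluster. All endpoint and credential values below are
# placeholders, not real resources.

def jdbc_url(endpoint: str, port: int, database: str) -> str:
    """Build a JDBC URL in the format the Redshift JDBC driver expects."""
    return f"jdbc:redshift://{endpoint}:{port}/{database}"

def odbc_connection_string(endpoint: str, port: int, database: str,
                           user: str, password: str) -> str:
    """Build a key=value ODBC connection string (driver name may vary by install)."""
    return (
        f"Driver={{Amazon Redshift (x64)}};"
        f"Server={endpoint};Port={port};Database={database};"
        f"UID={user};PWD={password}"
    )

if __name__ == "__main__":
    # Placeholder endpoint in the format the console displays.
    endpoint = "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com"
    print(jdbc_url(endpoint, 5439, "dev"))
    print(odbc_connection_string(endpoint, 5439, "dev", "masteruser", "secret"))
```

Port 5439 is Redshift's default; your cluster may use a different one if you changed it at launch.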
Learn more about Amazon Redshift and how to get started from the resources below. To request support for your data warehouse proof-of-concept or evaluation, click here.
Try Amazon Redshift for free
Get 750 free DC1.Large hours per month for 2 months. To start the trial:
- Create an AWS account and sign in to the Amazon Redshift Console
- Launch an Amazon Redshift cluster and select DC1.Large as the node type
- Request support for your proof-of-concept
- Report and ingest data for free using our Partner Free Trials
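The launch step above can also be scripted. The sketch below shows the parameters you would pass to the CreateCluster API (for example via boto3's Redshift client) to stay within the free-trial offer; the cluster identifier, database name, and credentials are placeholder assumptions, not required values.

```python
# Sketch: CreateCluster parameters for a free-trial cluster. A single
# dc1.large node matches the trial described above (750 free hours/month
# covers one node running continuously). All names/credentials are placeholders.

trial_cluster_params = {
    "ClusterIdentifier": "examplecluster",   # placeholder cluster name
    "NodeType": "dc1.large",                 # free-trial eligible node type
    "ClusterType": "single-node",            # one node keeps usage within the free hours
    "DBName": "dev",                         # placeholder database name
    "MasterUsername": "masteruser",          # placeholder credentials
    "MasterUserPassword": "ChangeMe1",       # placeholder; must meet password rules
}

# With AWS credentials configured, the launch call would look like:
#   import boto3
#   boto3.client("redshift").create_cluster(**trial_cluster_params)
print(trial_cluster_params["NodeType"])
```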
This self-paced lab enables you to test-drive Amazon Redshift and gain practical experience working with it. Designed by AWS subject matter experts, this hands-on training lab provides step-by-step instructions to help you gain confidence working with Amazon Redshift and learn more about building your data warehouse on AWS.
In this advanced lab series, you will delve deeper into the uses and capabilities of Amazon Redshift. You will use a remote SQL client to create and configure tables, and gain practice loading large data sets into Amazon Redshift. You will explore the effects of schema variations and compression, visualize the data, and run predictive analytics.
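To give a flavor of the table-creation and bulk-loading tasks the advanced lab covers, the sketch below holds the kind of SQL you would run from a remote client: a CREATE TABLE with a distribution key, sort key, and column compression, followed by a COPY from Amazon S3. The table layout, S3 path, and IAM role ARN are illustrative assumptions only.

```python
# Sketch of the kind of statements exercised in the lab. Table/column names,
# the S3 location, and the IAM role below are placeholders.

create_sql = """
CREATE TABLE sales (
    sale_id   INTEGER NOT NULL,
    item_id   INTEGER ENCODE lzo,   -- column compression encoding
    sale_date DATE,
    amount    DECIMAL(8,2)
)
DISTKEY (item_id)      -- distribution key: co-locates rows for joins on item_id
SORTKEY (sale_date);   -- sort key: speeds range-restricted scans on sale_date
"""

copy_sql = """
COPY sales
FROM 's3://example-bucket/sales/'   -- placeholder S3 location
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'  -- placeholder
FORMAT AS CSV;
"""

print(create_sql)
print(copy_sql)
```

COPY loads data in parallel across the cluster's slices, which is why the lab uses it rather than row-by-row INSERTs for large data sets.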