AWS Partner Network (APN) Blog
Tag: Amazon Redshift Spectrum
How N-iX Developed an End-to-End Big Data Platform on AWS for Gogo
Gogo is a global provider of broadband connectivity products and services for business aviation. It needed a qualified engineering team to undertake a complete transition of its solutions to the cloud, build a unified data platform, and deliver the best possible in-flight internet speeds. Learn how N-iX developed the data platform on AWS that aggregates data from over 20 different sources using Apache Spark on Amazon EMR.
Modernizing Data Assets from an On-Premises Data Warehouse to Amazon Redshift
Many enterprise customers are looking to migrate their on-premises data warehouses, such as Oracle, to Amazon Redshift. Learn how TEKsystems applies a set of tools, technologies, and methodologies to meet customers wherever they are on their AWS cloud journey. With a phased migration approach, TEKsystems' customer realized immediate savings by moving off an on-premises data warehouse while running its current applications against Amazon Redshift through partner solutions.
Operational Analytics with MongoDB Atlas and Amazon Redshift
Enterprises are building data analysis capabilities to extract information captured in data, develop an understanding of their business, and channel efforts towards customer centricity. This post explains the need for operational analytics and how it can be achieved with MongoDB Atlas and Amazon Redshift. MongoDB is an AWS Data and Analytics Competency Partner and developer data platform company empowering innovators to unleash the power of software and data.
Creating Unique Customer Experiences with Capgemini’s Next-Gen Customer Intelligence Platforms
Customer experience is at its best when a customer perceives the experience offered is unique and aligns to their preferences. The need to engage, at a very personal level, becomes key. Learn how Capgemini’s data and analytics practice implements customer intelligence platforms on AWS to help companies build a unified data hub. This enables customer data to be converted into insights that can be used for reporting and building AI/ML predictive analytics capabilities.
How to Simplify Machine Learning with Amazon Redshift
Building effective machine learning models requires storing and managing historical data, but conventional databases can quickly become difficult to manage; queries start taking too long, for example, slowing down business decisions. Learn how to use Amazon Redshift ML and Query Editor V2 to create, train, and apply ML models to predict diabetes cases for a sample diabetes dataset. You can follow a similar approach to address other use cases such as customer churn prediction and fraud detection.
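Conceptually, the workflow the post walks through is train-on-historical, predict-on-new: Amazon Redshift ML does this with a CREATE MODEL statement and a generated SQL prediction function. As a minimal illustration of that idea (not the Redshift ML API itself), here is a pure-Python sketch; the dataset, feature name, and single-feature threshold model are hypothetical:

```python
# Conceptual sketch of the train/predict flow that Redshift ML automates:
# learn a model from labeled historical rows, then score new rows with it.
# The data, "glucose" feature, and threshold model are hypothetical.

def train_threshold_model(rows, feature, label):
    """Learn a single cut-point on one feature that best separates labels."""
    best_cut, best_acc = None, -1.0
    for cut in sorted({r[feature] for r in rows}):
        acc = sum((r[feature] >= cut) == r[label] for r in rows) / len(rows)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

training = [
    {"glucose": 85, "diabetic": False},
    {"glucose": 168, "diabetic": True},
    {"glucose": 95, "diabetic": False},
    {"glucose": 155, "diabetic": True},
]

cut = train_threshold_model(training, "glucose", "diabetic")

def predict(row):
    # Analogous to the SQL function Redshift ML generates for scoring.
    return row["glucose"] >= cut

print(predict({"glucose": 180}))  # → True
```

In Redshift ML the same two steps are a single CREATE MODEL statement over a training query and a generated function you call from SELECT, with model selection and training handled by the service.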
Using AtScale and Amazon Redshift to Build a Modern Analytics Program with a Lake House
There has been a lot of buzz about a new data architecture design pattern called a Lake House. A Lake House approach integrates a data lake with the data warehouse and all of the purpose-built stores so customers no longer have to take a one-size-fits-all approach and are able to select the storage that best suits their needs. Learn how to couple Amazon Redshift with a semantic layer from AtScale to deliver fast, agile, and analysis-ready data to business analysts and data scientists.
Leveraging Serverless Architecture to Build an Enterprise Data Repository Platform for Customer Insights and Analytics
Moving data between multiple data stores requires an extract, transform, load (ETL) process using various data analysis approaches. ETL operations form the backbone of any modern enterprise data and analytics platform. AWS provides a broad range of services to deploy enterprise-grade applications in the cloud. This post explores a strategic collaboration between Tech Mahindra and a customer to build and deploy an enterprise data repository on AWS and create ETL workflows using a serverless architecture.
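The extract, transform, load pattern the post describes can be sketched in a few lines. In the Tech Mahindra architecture these stages would map onto serverless AWS services (for example, event-driven functions and managed ETL jobs); the source records and field names below are hypothetical:

```python
# Minimal sketch of the extract-transform-load (ETL) pattern: pull raw
# records, validate and normalize them, then upsert into a target store.
# Source records and field names are hypothetical.

def extract(source):
    """Pull raw records from a source system (here, an in-memory list)."""
    return list(source)

def transform(records):
    """Validate, normalize, and filter records before loading."""
    cleaned = []
    for r in records:
        if r.get("customer_id") is None:  # drop rows that fail validation
            continue
        cleaned.append({
            "customer_id": r["customer_id"],
            "region": r.get("region", "unknown").lower(),
            "spend": round(float(r.get("spend", 0)), 2),
        })
    return cleaned

def load(records, target):
    """Upsert records into a target store keyed by customer_id."""
    for r in records:
        target[r["customer_id"]] = r
    return target

raw = [
    {"customer_id": 1, "region": "EU", "spend": "19.991"},
    {"customer_id": None, "region": "US"},  # invalid: dropped
    {"customer_id": 2, "spend": 5},
]
warehouse = load(transform(extract(raw)), {})
print(len(warehouse))  # → 2
```

Keeping the three stages as separate, stateless functions is what makes the pattern a natural fit for a serverless deployment, where each stage can scale and fail independently.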
How SnapLogic eXtreme Helps Visualize Spark ETL Pipelines on Amazon EMR
Fully managed cloud services enable global enterprises to focus on strategic differentiators instead of maintaining infrastructure, by creating data lakes and performing big data processing in the cloud. SnapLogic eXtreme allows citizen integrators (business users who don't code) and data integrators to efficiently support and augment data-integration use cases by performing complex transformations on large volumes of data. Learn how to set up SnapLogic eXtreme and use Amazon EMR to perform ETL into Amazon Redshift.
Change Data Capture from On-Premises SQL Server to Amazon Redshift Target
Change Data Capture (CDC) is the technique of systematically tracking incremental change in data at the source, and subsequently applying these changes at the target to maintain synchronization. You can implement CDC in diverse scenarios using a variety of tools and technologies. Here, Cognizant uses a hypothetical retailer with a customer loyalty program to demonstrate how CDC can synchronize incremental changes in customer activity with the main body of data already stored about a customer.
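The apply step of CDC can be sketched concisely: change events captured at the source (inserts, updates, deletes, in commit order) are replayed against the target to keep it in sync. The event shape and the loyalty-program fields below are hypothetical:

```python
# Minimal sketch of the CDC apply step: replay ordered change events
# (insert/update/delete) onto a target keyed by primary key, so the target
# stays synchronized with the source. Event shape and fields are hypothetical.

def apply_changes(target, changes):
    """Replay ordered change events onto a target keyed by primary key."""
    for op, key, row in changes:
        if op in ("insert", "update"):
            target[key] = {**target.get(key, {}), **row}  # upsert/merge
        elif op == "delete":
            target.pop(key, None)
    return target

# Target already holds the main body of customer data.
target = {101: {"name": "Ana", "points": 120}}

# Incremental activity captured at the source since the last sync.
changes = [
    ("update", 101, {"points": 150}),           # Ana earns points
    ("insert", 102, {"name": "Ben", "points": 10}),
    ("delete", 101, None),                      # Ana closes her account
]

print(apply_changes(target, changes))  # → {102: {'name': 'Ben', 'points': 10}}
```

Replaying events in commit order matters: applying the delete before the update above would resurrect a closed account. Production CDC tools preserve source transaction order for exactly this reason.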
Best Practices from Onica for Optimizing Query Performance on Amazon Redshift
Effective and economical use of data is critical to your success. As data volumes increase exponentially, managing and extracting value from data becomes increasingly difficult. By adopting best practices that Onica has developed over years of using Amazon Redshift, you can improve the performance of your AWS data warehouse implementation. Onica has completed multiple projects ranging from assessing the current state of an Amazon Redshift cluster to helping tune, optimize, and deploy new clusters.