AWS Partner Network (APN) Blog

Tag: Data Warehouse

Advanced Connection Pooling with the Heimdall Proxy

Databases are often a key component of internet infrastructure, and IT departments may be challenged by poor connection management from the application. The Heimdall Proxy helps developers, database administrators, and architects horizontally scale out and optimize connections through connection pooling for Amazon RDS and Amazon Redshift without any application changes. As a result, you can reduce your database instance size and support higher user counts.
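Because the proxy sits between the application and the database, adopting it usually amounts to a connection-string change. A minimal sketch, assuming a proxy deployed in front of a PostgreSQL-flavored RDS instance (all hostnames and credentials below are placeholders, not from the post):

```python
import psycopg2

# Without a proxy, each application worker would connect straight to
# RDS (e.g. host="mydb.abc123.us-east-1.rds.amazonaws.com") and hold
# its own connection. Pointing the same driver at the proxy endpoint
# lets Heimdall multiplex many client connections onto a small pooled
# set of database connections.
conn = psycopg2.connect(
    host="heimdall-proxy.internal.example.com",  # proxy, not the RDS endpoint
    port=5432,
    dbname="appdb",
    user="app_user",
    password="...",
)

with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
```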

Accelerate Data Warehousing by Streaming Data with Confluent Cloud into Amazon Redshift

Built as a cloud-native service, Confluent Cloud offers developers a serverless experience with elastic scaling and pricing that charges only for what they stream. Confluent’s Kafka Connect Amazon Redshift Sink Connector exports Avro, JSON Schema, or Protobuf data from Apache Kafka topics to Amazon Redshift: the connector polls its subscribed topics and writes the records to an Amazon Redshift database.
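To show how such a sink is wired up, here is a hedged sketch of registering the connector with a self-managed Kafka Connect worker through its REST API. The property names follow Confluent’s documented connector settings, but every endpoint and credential is a placeholder; in Confluent Cloud, the fully managed connector is configured through the Confluent UI or CLI instead.

```python
import json
import urllib.request

# Minimal Redshift Sink Connector configuration. Topic, cluster, and
# credential values are placeholders.
connector = {
    "name": "redshift-sink",
    "config": {
        "connector.class": "io.confluent.connect.aws.redshift.RedshiftSinkConnector",
        "topics": "orders",  # the subscribed Kafka topics
        "aws.redshift.domain": "my-cluster.abc123.us-east-1.redshift.amazonaws.com",
        "aws.redshift.port": "5439",
        "aws.redshift.database": "dev",
        "aws.redshift.user": "awsuser",
        "aws.redshift.password": "...",
        "auto.create": "true",  # create the target table if it does not exist
    },
}

# Register the connector with the Kafka Connect REST API.
req = urllib.request.Request(
    "http://localhost:8083/connectors",
    data=json.dumps(connector).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```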

Best Practices from Onica for Optimizing Query Performance on Amazon Redshift

Effective and economical use of data is critical to your success. As data volumes increase exponentially, managing and extracting value from data becomes increasingly difficult. By adopting best practices that Onica has developed over years of using Amazon Redshift, you can improve the performance of your AWS data warehouse implementation. Onica has completed multiple projects ranging from assessing the current state of an Amazon Redshift cluster to helping tune, optimize, and deploy new clusters.
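To make that concrete, here is one representative example of the kind of tuning such engagements cover: choosing a distribution key that co-locates commonly joined rows and a sort key that matches frequent range filters. The table, columns, and cluster details are illustrative, not taken from the post.

```python
import psycopg2

# DISTKEY on the join column avoids redistributing rows across nodes
# at query time; SORTKEY on the filter column lets range scans skip
# blocks. Both are set at table creation.
ddl = """
CREATE TABLE sales (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (sale_date);
"""

conn = psycopg2.connect(host="my-cluster.example.com", port=5439,
                        dbname="dev", user="awsuser", password="...")
with conn, conn.cursor() as cur:
    cur.execute(ddl)
```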

Analyze Streaming Data from Amazon Managed Streaming for Apache Kafka Using Snowflake 

When streaming data comes in from a variety of sources, organizations should have the capability to ingest this data quickly and join it with other relevant business data to derive insights and provide positive experiences to customers. Learn how you can use Amazon MSK, a fully managed Apache Kafka-compatible service, to ingest streaming data, and explore how to use a Kafka Connect application to persist this data to Snowflake. This enables businesses to derive near real-time insights into end users’ experiences and feedback.
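As a hedged sketch of the Kafka Connect piece, the configuration below uses property names from Snowflake’s documented Kafka connector; the Connect cluster is assumed to point at the MSK bootstrap brokers, and every value is a placeholder.

```python
# Snowflake sink connector configuration for a Kafka Connect cluster
# whose workers are bootstrapped against the MSK brokers. All names,
# accounts, and keys below are placeholders.
snowflake_sink = {
    "name": "snowflake-sink",
    "config": {
        "connector.class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
        "topics": "clickstream",
        "snowflake.url.name": "myaccount.snowflakecomputing.com:443",
        "snowflake.user.name": "kafka_connector_user",
        "snowflake.private.key": "<private-key>",
        "snowflake.database.name": "RAW",
        "snowflake.schema.name": "KAFKA",
        "value.converter": "com.snowflake.kafka.connector.records.SnowflakeJsonConverter",
        "tasks.max": "2",
    },
}
# Submitted with a POST to the Connect REST API, as in the Confluent
# example earlier on this page; records then land in Snowflake tables
# for near real-time querying.
```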

Enabling Customer Attribution Models on AWS with Automated Data Integration

Attribution models allow companies to guide marketing, sales, and support efforts using data, and then tailor every customer’s experience for maximum effect. Together, cloud-based data pipeline tools like Fivetran and data warehouses like Amazon Redshift form the infrastructure for integrating and centralizing data from across a company’s operations, enabling business intelligence and analytics.
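Once the data is centralized, an attribution model can be as simple as a SQL query over the warehouse. The sketch below shows a last-touch model; the table and column names are illustrative, not from the post.

```python
# Credit each conversion to the most recent marketing touch that
# preceded it, then count conversions per channel. Assumes Fivetran
# has loaded hypothetical marketing_touches and conversions tables
# into Amazon Redshift.
LAST_TOUCH_SQL = """
WITH ranked_touches AS (
    SELECT
        t.customer_id,
        t.channel,
        ROW_NUMBER() OVER (
            PARTITION BY t.customer_id
            ORDER BY t.touched_at DESC
        ) AS rn
    FROM marketing_touches t
    JOIN conversions c
      ON c.customer_id = t.customer_id
     AND t.touched_at <= c.converted_at
)
SELECT channel, COUNT(*) AS attributed_conversions
FROM ranked_touches
WHERE rn = 1  -- the last touch before conversion
GROUP BY channel
ORDER BY attributed_conversions DESC;
"""
```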

How to Create a Continually Refreshed Amazon S3 Data Lake in Just One Day

Data management architectures have evolved drastically from the traditional data warehousing model to today’s more flexible systems that use pay-as-you-go cloud computing models for big data workloads. Learn how AWS services like Amazon EMR can be used with Bryte Systems to deploy an Amazon S3 data lake in one day. We’ll also detail how AWS and the BryteFlow solution can automate modern data architecture to significantly accelerate delivery and business insights at scale.

Driving Hybrid Cloud Analytics with Amazon Redshift and Denodo Data Virtualization

A data integration architecture that can virtually connect multiple data platforms gives business users immediate access to data with far less IT friction than traditional methods, so you can make faster, more data-driven decisions. The Denodo Platform for AWS can aid organizations in managing their data by providing an alternative data integration method. With Denodo, data is presented in real time, without the need to replicate it to a new consolidated repository.
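From a client’s point of view, a Denodo virtual view is queried like any ordinary table. A minimal sketch, assuming an ODBC data source and a virtual view whose names are purely illustrative:

```python
import pyodbc

# The client issues ordinary SQL; Denodo resolves the virtual view
# across the underlying platforms (e.g. Amazon Redshift plus an
# on-premises database) at request time, with no replicated copy.
conn = pyodbc.connect("DSN=denodo_vdp;UID=analyst;PWD=...")
cursor = conn.cursor()
cursor.execute("""
    SELECT region, SUM(revenue) AS total_revenue
    FROM bv_unified_sales  -- virtual view spanning cloud and on-prem sources
    GROUP BY region
""")
for row in cursor.fetchall():
    print(row)
```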

Building Serverless Data Pipelines on Amazon Redshift by Writing SQL with Datacoral

Amazon Redshift is a powerful yet affordable data warehouse, and while getting data out of Redshift is easy, getting data into and around Redshift can pose problems as the warehouse grows. Datacoral is a serverless data platform that manages metadata changes, data transformations, and pipeline orchestration for data consumers. In this post, learn how to write Redshift SQL to represent data flow, and how serverless data pipelines are automatically generated from that data flow.
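The core idea is that the SQL itself encodes the data flow: the tables a statement reads become its upstream dependencies, and the pipeline DAG can be inferred from them. A toy sketch of that inference follows; the parsing and scheduling mechanics of Datacoral’s actual product are certainly more involved.

```python
import re

# A derived table declared as plain SQL. The FROM clause tells the
# platform what this table depends on.
daily_revenue_sql = """
SELECT order_date, SUM(amount) AS revenue
FROM raw.orders
GROUP BY order_date
"""

# Toy dependency inference: every table read by the statement becomes
# an upstream node in the DAG, so the materialization can be scheduled
# to run after raw.orders is loaded.
upstream = re.findall(r"\bFROM\s+([\w.]+)", daily_revenue_sql, re.IGNORECASE)
print(upstream)  # ['raw.orders']
```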

Using the Heimdall Proxy to Split Reads and Writes for Amazon Aurora and Amazon RDS

Horizontally scaling a SQL database involves separating the write master from read-only servers. This allows the write server to perform dedicated write operations rather than processing redundant read queries. However, writing to one node and reading from another can result in inconsistent data due to synchronization delays. Heimdall Data offers a database proxy to help developers and architects achieve optimal scale from their Amazon RDS and Amazon Aurora environments without any application changes.
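What the proxy automates can be pictured as routing statements by type between Aurora’s writer and reader endpoints. The deliberately naive sketch below (endpoints and credentials are placeholders) ignores exactly the synchronization-delay problem described above, which is what a lag-aware proxy is there to solve.

```python
import psycopg2

# Placeholder Aurora cluster endpoints: the writer endpoint and the
# load-balanced read-only endpoint.
WRITER = psycopg2.connect(host="mycluster.cluster-abc.us-east-1.rds.amazonaws.com",
                          dbname="appdb", user="app", password="...")
READER = psycopg2.connect(host="mycluster.cluster-ro-abc.us-east-1.rds.amazonaws.com",
                          dbname="appdb", user="app", password="...")

def execute(sql, params=None):
    """Naive read/write split: SELECTs go to a replica, everything else
    to the writer. Unlike this sketch, a proxy can track replication
    lag and keep reads on the writer until replicas catch up."""
    conn = READER if sql.lstrip().upper().startswith("SELECT") else WRITER
    with conn.cursor() as cur:
        cur.execute(sql, params)
        return cur.fetchall() if cur.description else None
```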

Accelerating Data Warehouse Migration to Amazon Redshift Using Cognizant Intelligent Data Works

Many organizations are looking to migrate existing, on-premises enterprise data warehouse systems to cloud-based data warehouses such as Amazon Redshift. Here, we discuss how Cognizant’s Intelligent Migration Workbench (IMW) can be used to accelerate data warehouse migrations while converting Oracle PL/SQL and Teradata BTEQ scripts. IMW makes it easy to move mission-critical proprietary code to AWS, giving customers a competitive edge through faster time to market.
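A toy illustration of the mechanical part of such a conversion, mapping a few Oracle and Teradata idioms to Redshift equivalents. These mappings are illustrative, not IMW’s actual rules; a real workbench parses full procedural code rather than rewriting function names.

```python
import re

# A handful of textual dialect mappings of the kind a conversion
# workbench automates at much larger scale.
RULES = [
    (r"\bNVL\s*\(", "COALESCE("),   # Oracle NVL -> ANSI COALESCE
    (r"\bSYSDATE\b", "GETDATE()"),  # Oracle SYSDATE -> Redshift GETDATE()
    (r"\bSEL\b", "SELECT"),         # Teradata BTEQ shorthand for SELECT
]

def to_redshift(sql: str) -> str:
    for pattern, replacement in RULES:
        sql = re.sub(pattern, replacement, sql, flags=re.IGNORECASE)
    return sql

print(to_redshift("SEL NVL(ship_date, SYSDATE) FROM orders"))
# -> SELECT COALESCE(ship_date, GETDATE()) FROM orders
```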