AWS Partner Network (APN) Blog

Category: AWS Big Data

Vcinity-APN-Blog-013124

Establishing a Continuous Data Pipeline with Vcinity on AWS

Vcinity’s data movement and remote data access solutions enable enterprises to build continuous data pipelines that provide secure, performant access to distributed data. By extending high-speed networking protocols over wide area networks, Vcinity allows AWS services to operate on remote data as if it were local, reducing data transfer costs and latency. This enables real-time analytics, AI/ML model training, cloud migrations, and other use cases.

Alation-APN-Blog-010424

Creating a Secure Data Catalog with Alation Cloud Services and AWS PrivateLink

AWS PrivateLink allows customers to securely connect cloud and on-premises data sources to Alation’s data catalog without exposing traffic to the public internet. This integration provides private connectivity between the customer’s VPC and Alation Cloud Services and simplifies the network architecture. Using PrivateLink with Alation enables organizations to build a catalog of metadata from selected data assets while maintaining compliance with security and regulatory requirements.
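As a rough sketch of the connectivity pattern described above, the snippet below creates an interface VPC endpoint with boto3. The service name, VPC, subnet, and security group IDs are placeholders for illustration, not Alation’s actual endpoint service; substitute the values Alation provides for your deployment.

```python
# Placeholder values -- substitute the endpoint service name supplied by
# Alation and the IDs from your own VPC.
ENDPOINT_PARAMS = {
    "VpcEndpointType": "Interface",          # interface endpoint = PrivateLink
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-EXAMPLE",
    "SubnetIds": ["subnet-0123456789abcdef0"],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    "PrivateDnsEnabled": True,               # resolve the service name privately
}

def create_endpoint(params=ENDPOINT_PARAMS):
    """Create the interface endpoint so catalog traffic stays on the AWS network."""
    import boto3  # deferred so the sketch can be read without AWS access configured
    ec2 = boto3.client("ec2")
    return ec2.create_vpc_endpoint(**params)
```

Once the endpoint is accepted on the service side, traffic to the catalog resolves to private IPs in your subnets rather than traversing the public internet.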

Snowflake-APN-Blog-101623

Implementing a Snowflake-Centric Data Mesh on AWS for Scalability and Autonomy

A data mesh architecture is a relatively new approach to managing data in large organizations, aimed at improving the scalability, agility, and autonomy of data teams. It calls for an architecture that removes the complexity and friction of provisioning data and managing its lifecycle. This post outlines an approach to implementing a data mesh with Snowflake as the data platform, supported by AWS services across all pillars of the data mesh architecture.

Palantir-APN-Blog-100523

Implementing an Operational Data Mesh with Palantir Foundry on AWS to Transform Your Organization

Data architectures and strategies continue to respond to the need for discoverability and to consumers’ desire to connect directly with producers. Data mesh is one such approach and provides a methodology for how organizations can organize around data domains by delivering data as a product. Learn how Palantir Foundry runs on AWS to help customers deliver and transform their data architectures through such an approach while leveraging and building on existing investments.

How Accenture Accelerates Building Enterprise Data Mesh Architecture on AWS

Data mesh is a decentralized approach to data management that strives to evolve the data platform from a technology-led, project-centric model into a federated, business-led, and product-centric paradigm by design. Learn how AWS and Accenture are helping customers rapidly set up a data mesh architecture on AWS using the newly announced Velocity platform. Explore how Velocity’s Data Mesh Fabric component can minimize the time and effort needed to set up a data mesh architecture on AWS.

Data Ingestion in a Multi-Tenant SaaS Environment Using AWS Services

AWS experts break down how you can build a multi-tenant data ingestion and processing engine using AWS services. We walk through each component of the data pipeline and examine key considerations that can influence how you design a SaaS multi-tenant data ingestion process. We also explore how multi-tenant streaming data can be ingested, transformed, and stored using AWS services, with constructs built into the pipeline to ensure the data is processed securely.
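One such construct is tenant-scoped partitioning: every record carries a tenant identifier that drives both its stream partition key and a tenant-prefixed storage location, so one tenant’s data never lands under another’s prefix. The sketch below illustrates the idea; the helper names and key layout are assumptions for illustration, not an AWS API.

```python
import hashlib
import json

def partition_key(tenant_id: str) -> str:
    """Derive a stable stream partition key from the tenant identifier."""
    return hashlib.sha256(tenant_id.encode()).hexdigest()[:16]

def s3_object_key(tenant_id: str, dataset: str, event_ts: str) -> str:
    """Tenant-prefixed object key, so IAM policies can scope access per prefix."""
    date = event_ts[:10]  # YYYY-MM-DD portion of an ISO-8601 timestamp
    return f"tenants/{tenant_id}/{dataset}/dt={date}/{event_ts}.json"

def prepare_record(tenant_id: str, dataset: str, event_ts: str, payload: dict) -> dict:
    """Envelope a raw event with the routing metadata the pipeline needs."""
    return {
        "PartitionKey": partition_key(tenant_id),
        "TargetKey": s3_object_key(tenant_id, dataset, event_ts),
        "Data": json.dumps({"tenant_id": tenant_id, **payload}),
    }
```

For example, `prepare_record("acme", "clicks", "2023-06-01T12:00:00Z", {"page": "/home"})` routes the event to `tenants/acme/clicks/dt=2023-06-01/…`, and an IAM policy restricted to the `tenants/acme/` prefix isolates that tenant’s data at rest.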

Fivetran-APN-Blog-080423

Building a Modern Data Lake with Fivetran and Amazon S3 to Accelerate Data-Driven Success

Many organizations are adopting data lakes to handle large volumes of data, along with flexible pipelines that fit the needs of consuming services and teams (machine learning, business intelligence, and analytics). In this post, we explore the modern data lake and how Fivetran can help accelerate time-to-value with Amazon S3 and Apache Iceberg. Fivetran offers pre-built connectors for 300+ data sources and uses ELT to land data in the warehouse or data lake.

N-iX-APN-Blog-021023

How N-iX Developed an End-to-End Big Data Platform on AWS for Gogo

Gogo is a global provider of broadband connectivity products and services for business aviation. It needed a qualified engineering team to undertake a complete transition of its solutions to the cloud, build a unified data platform, and optimize the speed of its inflight internet. Learn how N-iX developed a data platform on AWS that aggregates data from more than 20 sources using Apache Spark on Amazon EMR.

Cazena-AWS-Partners

Cazena’s Instant AWS Data Lake: Accelerating Time to Analytics from Months to Minutes

Given the breadth of use cases, data lakes need to be complete analytical environments with a variety of analytical tools, engines, and languages supporting a variety of workloads. These include traditional analytics, business intelligence, streaming event and Internet of Things (IoT) processing, advanced machine learning, and artificial intelligence processing. Learn how Cazena builds and deploys a production-ready data lake for customers in minutes.

Qlik-AWS-Partners

How to Unleash Mainframe Data with AWS and Qlik Replicate

Historically, mainframes have hosted core business processes, applications, and data, all of which are locked into these rigid and expensive systems. AWS and Qlik can liberate mainframe data in real time, enabling customers to exploit its full business value for data lakes, analytics, innovation, or modernization. In this post, we describe how customers use Qlik Replicate real-time data streaming to bring mainframe core business data onto AWS.
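Replication tools like Qlik Replicate emit a stream of change records (inserts, updates, deletes) rather than full table snapshots. As an illustration of what a consumer on the AWS side does with such a stream, the sketch below replays generic change events into an in-memory table; the event shape is a simplified assumption for illustration, not Qlik’s actual wire format.

```python
def apply_change(table: dict, event: dict) -> None:
    """Apply one change-data-capture event to a key -> row mapping."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        table[key] = event["row"]          # upsert the new row image
    elif op == "delete":
        table.pop(key, None)               # tolerate deletes for missing keys
    else:
        raise ValueError(f"unknown operation: {op}")

def replay(events: list) -> dict:
    """Rebuild current table state by replaying the ordered change stream."""
    table: dict = {}
    for event in events:
        apply_change(table, event)
    return table
```

Replaying the ordered stream yields the table’s current state without ever taking a full mainframe extract, which is what makes continuous, low-impact replication to a data lake feasible.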