AWS Partner Network (APN) Blog

Category: AWS Big Data


Guiding Clinical Trial Decision-making with Reliable Clinical Insights: How BluMaiden Biosciences Brings its Pharma Service to Global Clients through AWS

Therapeutic development relies on pharma-led clinical trials with robust endpoint data. Key decision points such as patient enrichment and endpoint optimization shape trial success, and advanced analytics extract the insights that aid targeted drug development, saving time and money. BluMaiden Biosciences' AWS-based platform tackles these data challenges, delivering reliable clinical insights to global clients for efficient trial design and execution.

Unlocking the Power of Customer Data: How Caylent and AWS Modernized an Analytics Pipeline

By Israel Mendes, Data Engineering Leader – Caylent
By Washim Nawaz, Analytics Specialist – AWS
By Isaac Owusu-Hemeng, Customer Solutions Manager – AWS
By Muz Syed, Sr. Partner Solutions Architect – AWS

In this post, we will discuss how APN Premier Partner Caylent leveraged a 3-day Experience-Based Acceleration (EBA) workshop to finalize and […]


Establishing a Continuous Data Pipeline with Vcinity on AWS

Vcinity’s data movement and remote data access solutions enable enterprises to build continuous data pipelines that provide secure, performant access to distributed data. By extending high-speed networking protocols over wide area networks, Vcinity allows AWS services to operate on remote data as if it were local, reducing data transfer costs and latency. This enables real-time analytics, AI/ML model training, cloud migrations, and other use cases.


Creating a Secure Data Catalog with Alation Cloud Services and AWS PrivateLink

AWS PrivateLink allows customers to securely connect cloud and on-premises data sources to Alation’s data catalog without exposing traffic to the public internet. This integration provides private connectivity between the customer’s VPC and Alation Cloud Service and simplifies network architecture. Using PrivateLink with Alation enables organizations to build a catalog of metadata from selected data assets while maintaining compliance with security and regulatory requirements.
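As a sketch of the consumer side of such a connection: the customer creates an interface VPC endpoint in their own VPC that points at the provider's PrivateLink endpoint service, so traffic to the catalog never leaves the AWS network. A hypothetical AWS CLI call illustrating the shape of that step (all resource IDs and the service name below are placeholders, not Alation's actual values):

```shell
# Create an interface VPC endpoint toward the provider's endpoint service.
# Every ID and the service name here is a placeholder for illustration.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled
```

The security group attached to the endpoint controls which resources in the VPC may reach the catalog, which is where the compliance boundary described above is typically enforced.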


Implementing a Snowflake-Centric Data Mesh on AWS for Scalability and Autonomy

A data mesh architecture is a relatively new approach to managing data in large organizations, aimed at improving the scalability, agility, and autonomy of data teams. It calls for an architecture that removes the complexity and friction of provisioning data and managing its lifecycle. This post outlines an approach to implementing a data mesh with Snowflake as the data platform, supported by AWS services across all pillars of the data mesh architecture.


Implementing an Operational Data Mesh with Palantir Foundry on AWS to Transform Your Organization

Data architectures and strategies continue to respond to the need for discoverability and consumer desire to directly connect with producers. Data mesh is one such approach and provides a methodology for how organizations can organize around data domains by delivering data as a product. Learn how Palantir Foundry runs on AWS to help customers deliver and transform their data architectures through such an approach while leveraging and building on existing investments.

How Accenture Accelerates Building Enterprise Data Mesh Architecture on AWS

Data mesh is a decentralized approach to data management that strives to evolve the data platform from a technology-led, project-centric model into a federated, business-led, product-centric paradigm by design. Learn how AWS and Accenture are helping customers rapidly set up data mesh architectures on AWS using the newly announced Velocity platform, and explore how Velocity's Data Mesh Fabric component can minimize the time and effort required to do so.

Data Ingestion in a Multi-Tenant SaaS Environment Using AWS Services

AWS experts break down how you can build a multi-tenant data ingestion and processing engine using AWS services. We walk through each component of this data pipeline and examine key considerations that can influence how you approach designing a SaaS multi-tenant data ingestion process. We also explore how multi-tenant streaming data can be ingested, transformed, and stored using AWS services, with constructs built into the pipeline to ensure the data is processed securely.
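One of the core constructs in a multi-tenant pipeline is stamping every event with its tenant context and deriving the stream partition key from the tenant ID, so one tenant's records consistently route together and stay separable from other tenants'. A minimal, hypothetical sketch of that idea (the `TenantEvent` type and key scheme are illustrative, not taken from the post):

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass
class TenantEvent:
    """An ingested event stamped with the tenant it belongs to."""
    tenant_id: str
    payload: dict


def partition_key(event: TenantEvent) -> str:
    """Derive a stable partition key from the tenant ID, so all of a
    tenant's records route to the same shard or prefix."""
    return hashlib.sha256(event.tenant_id.encode()).hexdigest()[:16]


def to_record(event: TenantEvent) -> dict:
    """Serialize the event in the shape a streaming put-record call
    expects: an opaque data blob plus the tenant-scoped partition key."""
    body = {"tenant_id": event.tenant_id, "payload": event.payload}
    return {
        "Data": json.dumps(body).encode(),
        "PartitionKey": partition_key(event),
    }
```

Because the key is a pure function of the tenant ID, two events from the same tenant always share a partition key, while events from different tenants (almost surely) do not, which keeps per-tenant ordering and isolation intact downstream.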


Building a Modern Data Lake with Fivetran and Amazon S3 to Accelerate Data-Driven Success

Many organizations are adopting data lakes to handle large volumes of data, along with flexible pipelines that fit the needs of consuming services and teams (machine learning, business intelligence, and analytics). In this post, we'll explore the modern data lake and how Fivetran can help accelerate time-to-value with Amazon S3 and Apache Iceberg. Fivetran offers pre-built connectors for 300+ data sources and employs ELT to land data in the warehouse or data lake.


How N-iX Developed an End-to-End Big Data Platform on AWS for Gogo

Gogo is a global provider of broadband connectivity products and services for business aviation. It needed a qualified engineering team to undertake a complete transition of its solutions to the cloud, build a unified data platform, and optimize the speed of its inflight internet. Learn how N-iX developed a data platform on AWS that aggregates data from more than 20 sources using Apache Spark on Amazon EMR.