AWS Partner Network (APN) Blog
Category: AWS Big Data
How N-iX Developed an End-to-End Big Data Platform on AWS for Gogo
Gogo is a global provider of broadband connectivity products and services for business aviation. It needed a qualified engineering team to undertake a complete transition of its solutions to the cloud, build a unified data platform, and optimize the speed of its inflight internet. Learn how N-iX developed a data platform on AWS that aggregates data from over 20 different sources using Apache Spark on Amazon EMR.
Cazena’s Instant AWS Data Lake: Accelerating Time to Analytics from Months to Minutes
Given the breadth of use cases, data lakes need to be complete analytical environments with a variety of analytical tools, engines, and languages supporting a variety of workloads. These include traditional analytics, business intelligence, streaming event and Internet of Things (IoT) processing, advanced machine learning, and artificial intelligence processing. Learn how Cazena builds and deploys a production-ready data lake in minutes for customers.
How to Unleash Mainframe Data with AWS and Qlik Replicate
Historically, mainframes have hosted core-business processes, applications, and data, all of which are locked away in these rigid and expensive systems. AWS and Qlik can liberate mainframe data in real time, enabling customers to exploit its full business value for data lakes, analytics, innovation, or modernization purposes. In this post, we describe how customers use Qlik Replicate real-time data streaming to put mainframe core-business data onto AWS.
In-Depth Strategies for Building a Scalable, Multi-Tenant SaaS Solution with Amazon Redshift
Software-as-a-Service (SaaS) presents developers and architects with a unique set of challenges. One essential decision you’ll have to make is how to partition data for each tenant of your system. Learn how to harness Amazon Redshift to build a scalable, multi-tenant SaaS solution on AWS. This post explores strategies that are commonly used to partition and isolate tenant data in a SaaS environment, and how to apply them in Amazon Redshift.
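Two of the partitioning strategies commonly discussed for multi-tenant warehouses are the schema-per-tenant ("silo") model and the shared-table ("pool") model. The sketch below illustrates the difference in query shape; the schema, table, and column names (`tenant_acme`, `events`, `tenant_id`) are hypothetical examples, not details from the post.

```python
# Illustrative sketch of two common tenant-partitioning strategies for a
# multi-tenant SaaS data warehouse such as Amazon Redshift.
# All identifiers here are made-up examples.

def siloed_query(tenant_schema: str) -> str:
    """Schema-per-tenant ("silo") model: each tenant's data lives in its
    own schema, so isolation comes from which schema the query targets."""
    return f'SELECT event_id, payload FROM "{tenant_schema}".events;'

def pooled_query(tenant_id: str) -> str:
    """Shared-table ("pool") model: all tenants share one table, every row
    carries a tenant_id, and every query must filter on it."""
    # In real code this value would be bound as a query parameter;
    # it is inlined here only to show the shape of the SQL.
    return (
        "SELECT event_id, payload FROM shared.events "
        f"WHERE tenant_id = '{tenant_id}';"
    )

print(siloed_query("tenant_acme"))
print(pooled_query("acme-001"))
```

The silo model gives stronger isolation at the cost of more objects to manage; the pool model scales to many tenants cheaply but makes correct `tenant_id` filtering a hard requirement on every query path.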
Accelerating Apache and Hadoop Migrations with Cazena’s Data Lake as a Service on AWS
Running Hadoop, Spark, and related technologies in the cloud provides the flexibility required by these distributed systems. Cazena provides a production-ready, continuously optimized, and secured Data Lake as a Service with multiple features that enable migration of Hadoop and Spark analytics workloads to AWS without the need for specialized skills. Learn how Cazena makes it easy to migrate to AWS while ensuring your data is as secure in the cloud as it is on-premises.
Maximizing the Value of Your Cloud-Enabled Enterprise Data Lake by Tracking Critical Metrics
Successful data lake implementations can serve a corporation well for years. Accenture, an APN Premier Consulting Partner, recently had an engagement with a Fortune 500 company that wanted to optimize its AWS data lake implementation. As part of the engagement, Accenture moved the customer to better-suited services and developed metrics to closely monitor the health of its overall environment in the cloud.
Turning Data into a Key Enterprise Asset with a Governed Data Lake on AWS
Data and analytics success relies on providing analysts and data end users with quick, easy access to accurate, quality data. Enterprises need a high-performing and cost-efficient data architecture that supports demand for data access, while providing the data governance and management capabilities required by IT. Data management excellence, which is best achieved via a data lake on AWS, captures quality data and makes it available to analysts in a fast and cost-effective way.
MongoDB Atlas Data Lake Lets Developers Create Value from Rich Modern Data
With the proliferation of cost-effective storage options such as Amazon S3, there should be no reason you can’t keep your data forever, except that with this much data it can be difficult to create value in a timely and efficient way. MongoDB’s Atlas Data Lake enables developers to mine their data for insights with more storage options and the speed and agility of the AWS Cloud. It provides a serverless parallelized compute platform that gives you a powerful and flexible way to analyze and explore your data on Amazon S3.
How to Create a Continually Refreshed Amazon S3 Data Lake in Just One Day
Data management architectures have evolved drastically from the traditional data warehousing model, to today’s more flexible systems that use pay-as-you-go cloud computing models for big data workloads. Learn how AWS services like Amazon EMR can be used with Bryte Systems to deploy an Amazon S3 data lake in one day. We’ll also detail how AWS and the BryteFlow solution can automate modern data architecture to significantly accelerate delivery and business insights at scale.
Leveraging Multi-Model Architecture to Deliver Rich Customer Relationship Profiles with Reltio Cloud
Building a true Customer 360 requires gaining a comprehensive view of customer behavior and preferences by aggregating data from all available sources, and more. With a single source of truth, a true Customer 360 delivers a complete, real-time customer view to all parts of the organization—sales, marketing, service, support, etc. This consistent and contextual insight can help enterprises delight customers with personalized experiences and timely offers at each touch point in the customer journey.