AWS Partner Network (APN) Blog

Category: Analytics

Best Practices from Onica for Optimizing Query Performance on Amazon Redshift

Effective and economical use of data is critical to your success. As data volumes increase exponentially, managing and extracting value from data becomes increasingly difficult. By adopting best practices that Onica has developed over years of using Amazon Redshift, you can improve the performance of your AWS data warehouse implementation. Onica has completed multiple projects, ranging from assessing the current state of existing Amazon Redshift clusters to tuning, optimizing, and deploying new ones.
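
To give a flavor of one common tuning lever, here is a minimal sketch of defining a table with explicit distribution and sort keys, which often govern join and scan performance. It assumes the Amazon Redshift Data API via boto3; the cluster, database, user, and table names are hypothetical:

```python
import boto3

client = boto3.client("redshift-data")

# A DISTKEY matching frequent join columns and a SORTKEY matching common
# range filters are typical levers for Redshift query performance.
ddl = """
CREATE TABLE sales (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (sale_date);
"""

response = client.execute_statement(
    ClusterIdentifier="my-redshift-cluster",  # hypothetical
    Database="analytics",                     # hypothetical
    DbUser="admin",                           # hypothetical
    Sql=ddl,
)
print(response["Id"])  # statement ID, usable with describe_statement
```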

Training Multiple Machine Learning Models Simultaneously Using Spark and Apache Arrow

Spark is a distributed computing framework whose newer features, such as Pandas UDFs, are built on PyArrow. You can leverage Spark for distributed, advanced machine learning model lifecycle capabilities to build massive-scale products with many models in production. Learn how Perion Network implemented a model lifecycle capability that distributes the training and testing stages in a few lines of PySpark code, improving the performance and accuracy of Perion’s ML models.
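
As an illustration of the underlying technique (not Perion’s actual code), here is a minimal PySpark sketch, assuming Spark 3.x, PyArrow, and scikit-learn, that uses an Arrow-backed grouped-map Pandas UDF to train one model per group in parallel:

```python
import pandas as pd
from pyspark.sql import SparkSession
from sklearn.linear_model import LinearRegression

spark = SparkSession.builder.appName("per-group-training").getOrCreate()

# Hypothetical dataset: one model is trained per `segment` value.
df = spark.createDataFrame(
    [("a", 1.0, 2.1), ("a", 2.0, 4.2), ("b", 1.0, 3.0), ("b", 2.0, 6.1)],
    ["segment", "x", "y"],
)

def train(pdf: pd.DataFrame) -> pd.DataFrame:
    # Runs on a worker; Arrow ships each group over as a Pandas DataFrame.
    model = LinearRegression().fit(pdf[["x"]], pdf["y"])
    return pd.DataFrame({"segment": [pdf["segment"].iloc[0]],
                         "coef": [float(model.coef_[0])]})

# applyInPandas is a grouped-map Pandas UDF: groups train in parallel
# across the cluster, one model per group.
results = df.groupBy("segment").applyInPandas(
    train, schema="segment string, coef double")
results.show()
```

Because each group travels to a worker as a whole Pandas DataFrame, adding more segments scales training out across the cluster rather than up on a single machine.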

Analyzing COVID-19 Data with AWS Data Exchange, Amazon Redshift, and Tableau 

To help everyone visualize COVID-19 data confidently and responsibly, we brought together APN Partners Salesforce, Tableau, and MuleSoft to create a centralized repository of trusted data from open source COVID-19 data providers. Anyone can work with the public data, blend it with their own data, or subscribe to the source datasets directly through AWS Data Exchange, and then use Amazon Redshift together with Tableau to better understand the impact on their organization.

How Mactores Tripled Performance by Migrating from Oracle to Amazon Redshift with Zero Downtime

Mactores used a five-step approach to migrate a large manufacturing company from an on-premises Oracle data warehouse to Amazon Redshift with zero downtime. The migration tripled the performance of dependent reports, dashboards, and business processes, lowered total cost of ownership (TCO) by 30 percent, and cut data refresh rates from 48 hours to three hours.

Building a Data Processing and Training Pipeline with Amazon SageMaker

Next Caller uses machine learning on AWS to drive its data analysis and processing pipeline. Amazon SageMaker helps Next Caller understand call pathways through the telephone network, rendering analysis in approximately 125 milliseconds with the VeriCall analysis engine. VeriCall verifies that a phone call is coming from the physical device that owns the phone number, and flags spoofed calls and other suspicious interactions in real time.
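
For a sense of what launching a training job looks like, here is a generic sketch using the SageMaker Python SDK; the container image, IAM role, and S3 paths are placeholders, not Next Caller’s actual pipeline:

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

# All identifiers below are placeholders.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-artifacts/",
    sagemaker_session=session,
)

# SageMaker provisions the instance, runs the training container against
# the channel data, and writes the model artifacts back to S3.
estimator.fit({"train": "s3://my-bucket/training-data/"})
```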

Q/Kdb+ on AWS Lambda: Serverless Time-Series Analytics at Scale

AWS Lambda is a particularly desirable environment for HPC applications because of the high level of parallelization it supports. Kx, an APN Advanced Technology Partner, created a q/kdb+ runtime that enables financial institutions to optimize their applications for the serverless environment of AWS Lambda. Q/kdb+ has been widely adopted by the financial services industry because of its small footprint, high performance, and high-volume time-series analytics capabilities.
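
The runtime itself is Kx’s work, but the fan-out pattern that makes Lambda attractive for this kind of workload can be sketched in a few lines of Python; the function name and partition keys below are hypothetical:

```python
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

def invoke(partition: str) -> dict:
    # Synchronous invocation; Lambda scales these requests out in parallel.
    resp = lambda_client.invoke(
        FunctionName="kdb-analytics",  # hypothetical function name
        Payload=json.dumps({"partition": partition}).encode(),
    )
    return json.loads(resp["Payload"].read())

# Fan out one invocation per date partition of the time series.
partitions = [f"2020.01.{day:02d}" for day in range(1, 32)]
with ThreadPoolExecutor(max_workers=31) as pool:
    results = list(pool.map(invoke, partitions))
```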

Monitoring Your Palo Alto Networks VM-Series Firewall with a Syslog Sidecar

By hosting a Palo Alto Networks VM-Series firewall in an Amazon VPC, you can use AWS native cloud services—such as Amazon CloudWatch, Amazon Kinesis Data Streams, and AWS Lambda—to monitor your firewall for changes in configuration. This post explains why that’s desirable and walks you through the steps required to do it, giving you a way to monitor your Palo Alto Networks firewall that closely mirrors how you monitor your AWS environment with AWS Config.
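
As a minimal sketch of the consuming end of such a pipeline, here is a hypothetical Lambda handler that decodes firewall syslog records arriving from a Kinesis data stream and surfaces configuration-change lines; the match string is illustrative, not the post’s exact filter:

```python
import base64

def handler(event, context):
    """Triggered by a Kinesis data stream carrying firewall syslog lines."""
    for record in event["Records"]:
        # Kinesis record data arrives base64-encoded.
        line = base64.b64decode(record["kinesis"]["data"]).decode("utf-8")

        # Illustrative filter: PAN-OS tags configuration logs with CONFIG.
        if "CONFIG" in line:
            print(f"Configuration change detected: {line}")
```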

Accelerating Machine Learning with Qubole and Amazon SageMaker Integration

Data scientists creating enterprise machine learning models to process large volumes of data spend a significant portion of their time managing the infrastructure required to process the data, rather than exploring the data and building ML models. You can reduce this overhead by pairing Qubole’s data processing tools with Amazon SageMaker. Qubole, an open data lake platform, automates the administration and management of your resources on AWS.

How to Use AWS Glue to Prepare and Load Amazon S3 Data for Analysis by Teradata Vantage

Customers want to use Teradata Vantage to analyze the data they have stored in Amazon S3, but AWS Glue, the AWS service that prepares and loads S3 data for analytics, does not natively support Teradata Vantage. To use AWS Glue to prepare and load data for analysis by Teradata Vantage, you need to rely on AWS Glue custom database connectors. Follow step-by-step instructions to learn how to set up Vantage and AWS Glue to perform Teradata-level analytics on the data you have stored in Amazon S3.
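
As a rough sketch of what such a Glue job can look like, the snippet below reads CSV data from S3 and writes it out through a custom JDBC connection; the connection name, target table, S3 path, and custom-connector option names are placeholders that the post’s step-by-step instructions pin down:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source data from S3 (bucket and format are placeholders).
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://my-bucket/raw-data/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Write to Teradata Vantage through a custom JDBC connector; the
# connection name refers to a Glue connection created for that connector,
# and exact option names can vary by connector version.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="custom.jdbc",
    connection_options={
        "connectionName": "teradata-vantage-connection",  # placeholder
        "dbTable": "analytics.s3_import",                 # placeholder
    },
)

job.commit()
```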

Running SQL on Amazon Athena to Analyze Big Data Quickly and Across Regions

Data is the lifeblood of a digital business and a key competitive advantage for many companies holding large amounts of data across multiple cloud regions. Imperva protects web applications and data assets. In this post, we examine how you can use SQL to analyze big data directly or to pre-process it for further analysis by machine learning. You’ll also learn about the benefits and limitations of using SQL, and see examples of clustering and data extraction.
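
As a minimal sketch of running such a query programmatically, the snippet below submits SQL to Athena with boto3 and polls for the result; the database, table, and output bucket are placeholders:

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical table of web access logs.
query = ("SELECT client_ip, COUNT(*) AS hits FROM access_logs "
         "GROUP BY client_ip ORDER BY hits DESC LIMIT 10")

start = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "security_logs"},   # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = start["QueryExecutionId"]

# Poll until the query finishes; Athena writes results to the S3 location.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```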