AWS Partner Network (APN) Blog
Category: Artificial Intelligence
Reimagining Vector Databases for the Generative AI Era with Pinecone Serverless on AWS
Pinecone has developed a novel serverless vector database architecture optimized for AI workloads like retrieval-augmented generation. Built on AWS, it decouples storage and compute and enables efficient intermittent querying of large datasets. This provides elasticity, fresher data, and major cost savings over traditional architectures. Pinecone serverless removes bottlenecks to building more knowledgeable AI applications economically at scale on AWS.
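The post is a high-level overview rather than a tutorial; as a minimal sketch of the pattern it describes, the snippet below creates a serverless index on AWS and runs a query with the current Pinecone Python SDK. The index name, dimension, and example vectors are placeholders, not values from the post.

```python
from pinecone import Pinecone, ServerlessSpec

# Placeholder API key and index name -- substitute your own values.
pc = Pinecone(api_key="YOUR_API_KEY")

# Create a serverless index on AWS; dimension must match your embedding model.
pc.create_index(
    name="rag-demo",
    dimension=1536,
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)

index = pc.Index("rag-demo")

# Upsert a few example vectors (normally produced by an embedding model).
index.upsert(vectors=[
    ("doc-1", [0.1] * 1536, {"source": "faq"}),
    ("doc-2", [0.2] * 1536, {"source": "manual"}),
])

# Retrieve the closest matches for a query embedding.
results = index.query(vector=[0.1] * 1536, top_k=2, include_metadata=True)
print(results)
```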
Boost Chip Design with AI: How Synopsys DSO.ai on AWS Delivers Lower Power and Faster Time-to-Market
The Synopsys.ai electronic design automation (EDA) suite on AWS harnesses AI to optimize chip design. A key component is Synopsys DSO.ai, which uses reinforcement learning to optimize power, performance, and area (PPA). Benefits include faster optimization, improved engineer productivity, design reuse, and faster process node migration. Deploying on AWS ParallelCluster provides auto-scaling, elasticity, and fast setup for massive EDA workloads. Testing showed 20% lower power, timing closure improvements, and significant cost savings.
How Accenture’s CCE Solution Powered by AWS Generative AI Helps Improve Customer Experience
Contact centers can improve customer experiences using generative AI, which creates new content and conversations. Accenture’s Connected Customer Experience (CCE) solution incorporates AWS services to provide personalized human and AI interactions. It uses generative AI for agent assist, call summarization, and self-service FAQs. By leveraging generative AI on AWS, CCE aims to enhance agent productivity, reduce handle times, and deliver exceptional customer experiences.
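The post does not share implementation details for CCE; purely as a hedged illustration of the call-summarization pattern it mentions, the sketch below sends a transcript to the Amazon Bedrock Converse API via boto3. The model ID and transcript are placeholder assumptions, and Accenture's actual implementation may differ.

```python
import boto3

# Bedrock runtime client; region and model ID are placeholder assumptions.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

transcript = "Agent: Thanks for calling... Customer: My order arrived damaged..."

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": f"Summarize this contact-center call in three bullet points:\n{transcript}"}],
    }],
)

# The summary text is returned in the first content block of the reply.
print(response["output"]["message"]["content"][0]["text"])
```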
Getting Started with Generative AI Using Hugging Face Platform on AWS
The Hugging Face Platform provides no-code and low-code solutions for deploying generative AI models on managed AWS infrastructure. Key features include Inference Endpoints for easy model deployment, Spaces for hosting machine learning apps, and AutoTrain for training state-of-the-art models without coding. Hugging Face is an AWS Generative AI Competency Partner whose mission is to democratize machine learning through open source, open science, and Hugging Face products and services.
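As a minimal sketch of the low-code path the post describes, the snippet below calls a deployed Inference Endpoint with the huggingface_hub client; the endpoint URL, token, and prompt are placeholders.

```python
from huggingface_hub import InferenceClient

# Point the client at a deployed Inference Endpoint (URL and token are placeholders).
client = InferenceClient(
    model="https://<your-endpoint>.endpoints.huggingface.cloud",
    token="hf_xxx",
)

# Run text generation against the hosted model.
output = client.text_generation(
    "Write a one-sentence summary of retrieval-augmented generation.",
    max_new_tokens=80,
)
print(output)
```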
New Generative AI Insights for AWS Partners to Accelerate Your Customer Offerings
AWS embraces the “working backwards” approach to stay customer-focused. The Generative AI Center of Excellence (CoE) for AWS Partners applies this methodology and collects partner feedback to provide relevant insights, tools, and resources on leveraging generative AI. Recent updates to the CoE include customer research on generative AI adoption challenges, a usage maturity heatmap by industry, and five new use case deep dives covering telecom, automotive, IDP, contact centers, and financial analysts.
Revolutionize Your Business with AWS Generative AI Competency Partners
With generative AI's ability to automate tasks, enhance productivity, and enable hyper-personalized customer experiences, businesses are seeking specialized expertise to build a successful generative AI strategy. To support this need, we're excited to announce the AWS Generative AI Competency, an AWS Specialization that helps AWS customers more quickly adopt generative AI solutions and strategically position themselves for the future.
Enabling ESG Compliance with the KewSustain AI-Powered Sustainability Platform
KewSustain is an AI platform by KewMann built on AWS that addresses gaps in sustainability compliance. It efficiently gathers ESG data from multiple sources, analyzes it, and generates sustainability reports adhering to major frameworks. Key features include automated data collection, collaboration tools, recommendations based on various risk factors, real-time insights via dashboards, and a virtual assistant providing guidance on reporting requirements.
How to Deploy Amazon Translate Spoke in ServiceNow for Language Detection and Translation
ServiceNow and AWS have collaborated to bridge language barriers in global workforces. Using AWS services like Amazon Translate and Amazon Comprehend, the Amazon Translate Spoke for ServiceNow Flow Designer enables automatic translation of text into employees' native languages. By demonstrating how the Amazon Translate Spoke can translate knowledge articles, this post explains how ServiceNow customers can easily build multi-language workflows to serve global users.
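Outside of ServiceNow, the same two AWS APIs can be exercised directly; the sketch below detects a text's language with Amazon Comprehend and translates it with Amazon Translate using boto3. The sample text and target language are placeholders.

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
translate = boto3.client("translate", region_name="us-east-1")

text = "Bitte starten Sie den Laptop neu, bevor Sie ein Ticket eröffnen."

# Detect the dominant language of the incoming text.
detected = comprehend.detect_dominant_language(Text=text)
source_lang = detected["Languages"][0]["LanguageCode"]

# Translate into the employee's preferred language (English here as a placeholder).
result = translate.translate_text(
    Text=text,
    SourceLanguageCode=source_lang,
    TargetLanguageCode="en",
)
print(result["TranslatedText"])
```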
How to Use Amazon SageMaker Pipelines MLOps with Gretel Synthetic Data
Generating high-quality synthetic data protects privacy and augments scarce real-world data for training machine learning models. This post shows how to integrate the Gretel synthetic data platform with Amazon SageMaker Pipelines for a full ML workflow. Gretel’s integration with SageMaker Pipelines in a hybrid or fully managed cloud environment enables responsible and robust adoption of AI while optimizing model accuracy. With Gretel, data scientists can overcome data scarcity without compromising individuals’ privacy.
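The post walks through the full integration; as a hedged sketch of the general shape, the snippet below defines a one-step SageMaker Pipeline whose processing step runs a user-supplied script (here a hypothetical generate_synthetic.py that would call the Gretel SDK). The instance type, script name, and output path are assumptions, not the post's exact configuration.

```python
import sagemaker
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingOutput
from sagemaker.workflow.steps import ProcessingStep
from sagemaker.workflow.pipeline import Pipeline

role = sagemaker.get_execution_role()

# Processor that runs the synthesis script on a managed instance.
processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

# generate_synthetic.py is a hypothetical script that would call the Gretel SDK
# to train a synthetics model and write the generated records to the output path.
synth_step = ProcessingStep(
    name="GenerateSyntheticData",
    processor=processor,
    code="generate_synthetic.py",
    outputs=[ProcessingOutput(output_name="synthetic", source="/opt/ml/processing/output")],
)

pipeline = Pipeline(name="gretel-synthetic-data-pipeline", steps=[synth_step])
pipeline.upsert(role_arn=role)
pipeline.start()
```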
Why SuccessKPI’s Use of Sentiment Analysis is Transformative for Customer Experiences
Companies often struggle to understand customer sentiment in call center data, posing barriers to responding to emotions in real time. SuccessKPI uses natural language understanding and machine learning on a large dataset to enable sentiment prediction. This helps avoid the bias of human reviewers and attain more objective results when analyzing customer feedback. SuccessKPI offers capabilities like sentiment by channel, time, quarter, and entity to transform customer experiences.
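The post does not describe SuccessKPI's models; purely as an illustration of automated sentiment scoring on call-center text, the sketch below uses Amazon Comprehend's detect_sentiment API, which is one AWS building block for this kind of analysis. The utterance is a made-up example.

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Example utterance from a hypothetical contact-center transcript.
utterance = "I've been on hold for forty minutes and nobody can find my order."

response = comprehend.detect_sentiment(Text=utterance, LanguageCode="en")

# Overall label plus per-class confidence scores.
print(response["Sentiment"])        # e.g. NEGATIVE
print(response["SentimentScore"])   # {'Positive': ..., 'Negative': ..., ...}
```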