AWS Partner Network (APN) Blog
Category: Generative AI
Breaking Language Boundaries: Multilingual GenAI Solutions with Amazon Bedrock
By Kelvin Kok, Chief Architect & CEO – Axrail
By Cara Lee, Account Manager – Axrail
By Brendan Child, Sr. AI/ML Partner Development Specialist – AWS
By Vasileios Vonikakis, Sr. AI/ML Partner Solution Architect – AWS
Generative AI-enhanced solutions, such as virtual assistants and enterprise search, have already demonstrated their potential in […]
Neurons Lab – Transforming Cybersecurity Audits with Generative AI on AWS
In this blog, you’ll learn how Neurons Lab, an AWS Advanced Tier Services Partner, collaborated with Peak Defence to automate compliance processes using Amazon Bedrock with the Anthropic Claude 3 model and Amazon SageMaker. The generative AI solution streamlines cybersecurity audits and RFP responses, reducing the time and resources required. The post covers architectural considerations, operationalization with AWS services, LLM evaluation, and continuous improvement.
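The post itself does not include code, but as a point of orientation, the sketch below shows what invoking an Anthropic Claude 3 model on Amazon Bedrock looks like with the AWS SDK for Python (boto3). The prompt, region, and model ID are illustrative assumptions, not Neurons Lab's actual compliance pipeline.

```python
import json
import boto3

# Bedrock Runtime client; the region is illustrative.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude 3 models on Bedrock use the Anthropic Messages request format.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{
        "role": "user",
        "content": [{"type": "text",
                     "text": "Summarize the key access-control requirements in this policy excerpt."}],
    }],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=body,
)

# The response body is a JSON document with the model's generated content.
answer = json.loads(response["body"].read())["content"][0]["text"]
print(answer)
```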
Wipro applying Data, AI/ML and Generative AI to the Telecom Industry
By Vanitha Jayasuriya, Full Stride Cloud Solution Lead, Germany – Wipro
By Yedu Kuruvath, Alliances and Partner Development Lead, AI Practice – Wipro
By Shaban Saddique, AWS Business Group Director, Europe – Wipro
By Benson Philip, Sr. Partner Development Manager, EMEA – AWS
By Bindhu Chinnadurai, Sr. Partner Solutions Architect, EMEA – AWS
[…]
Transforming Business Experiences: The Impact of Amazon Q and Generative BI for AWS Partners
Amazon Q and generative BI are transforming business operations, and AWS Partners like ZS Associates, Tiger Analytics, and Compass UOL are pioneering use cases that leverage these technologies to build industry-tailored solutions improving decision-making, operations, fraud detection, software development lifecycles, and more. AWS provides resources to help partners develop and deploy such transformative generative AI offerings.
How Datasaur Reimagines Data Labeling Tasks Using Generative AI on AWS
Generative AI adoption is rapidly growing to meet the massive data needs of modern machine learning models. Manually labeling data can be time-consuming, but AWS has collaborated with Datasaur to offer solutions that address data labeling challenges using generative AI. Datasaur’s NLP Platform automates annotation tasks and integrates with AWS services, while its LLM Labs evaluates large language models’ performance and cost for labeling.
Best Practices from Quantiphi for Unleashing Generative AI Functionality by Fine-Tuning LLMs
Fine-tuning large language models (LLMs) is crucial for leveraging their full potential across industries. Quantiphi unveils how fine-tuning supercharges LLMs to deliver domain-specific AI solutions that redefine possibilities. From personalized healthcare to precise financial predictions and streamlined legal reviews, fine-tuned models offer transformative value and unleash the power of customized, efficient, and responsible generative AI deployments.
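Quantiphi's post discusses fine-tuning at a conceptual level. Purely as a rough illustration of one common approach, the sketch below applies parameter-efficient LoRA fine-tuning with the Hugging Face transformers and peft libraries; the base model, dataset file, and hyperparameters are placeholders, not Quantiphi's recipe.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "mistralai/Mistral-7B-v0.1"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # this model has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters so only a small fraction of the weights is trained.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Hypothetical JSONL file with one {"text": ...} record per domain document.
data = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
data = data.map(lambda r: tokenizer(r["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```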
Reimagining Vector Databases for the Generative AI Era with Pinecone Serverless on AWS
Pinecone has developed a novel serverless vector database architecture optimized for AI workloads like retrieval-augmented generation. Built on AWS, it decouples storage and compute and enables efficient intermittent querying of large datasets. This provides elasticity, fresher data, and major cost savings over traditional architectures. Pinecone serverless removes bottlenecks to building more knowledgeable AI applications economically at scale on AWS.
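For reference (this code is not from the post), creating and querying a Pinecone serverless index on AWS with the Pinecone Python client looks roughly like the following; the index name, dimension, and vectors are illustrative.

```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

# Serverless indexes specify a cloud and region instead of pod sizes.
pc.create_index(
    name="rag-docs",
    dimension=1536,          # must match the embedding model's output size
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)

index = pc.Index("rag-docs")

# Upsert a document embedding (placeholder vector), then query for nearest neighbors.
index.upsert(vectors=[{"id": "doc-1", "values": [0.1] * 1536,
                       "metadata": {"source": "faq"}}])
matches = index.query(vector=[0.1] * 1536, top_k=3, include_metadata=True)
print(matches)
```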
How Accenture’s CCE Solution Powered by AWS Generative AI Helps Improve Customer Experience
Contact centers can improve customer experiences using generative AI, which creates new content and conversations. Accenture’s Connected Customer Experience (CCE) solution incorporates AWS services to provide personalized human and AI interactions. It uses generative AI for agent assist, call summarization, and self-service FAQs. By leveraging generative AI on AWS, CCE aims to enhance agent productivity, reduce handle times, and deliver exceptional customer experiences.
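Accenture's CCE implementation is not shown in the post. Purely as an illustration, call summarization with a foundation model on Amazon Bedrock could be sketched with the boto3 Converse API as below; the transcript, model choice, and prompt are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder transcript; in practice this would come from the contact center platform.
transcript = ("Agent: Thanks for calling, how can I help? "
              "Customer: My router keeps dropping the connection every evening...")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize this contact center call in three bullet points, "
                             "including the customer's issue and the resolution:\n" + transcript}],
    }],
    inferenceConfig={"maxTokens": 300, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```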
Getting Started with Generative AI Using Hugging Face Platform on AWS
The Hugging Face Platform provides no-code and low-code solutions for deploying generative AI models on managed AWS infrastructure. Key features include Inference Endpoints for easy model deployment, Spaces for hosting machine learning apps, and AutoTrain for training state-of-the-art models without coding. Hugging Face is an AWS Generative AI Competency Partner whose mission is to democratize machine learning through open source, open science, and Hugging Face products and services.
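For readers curious what the low-code path looks like programmatically, a minimal sketch using the huggingface_hub client to create and call an Inference Endpoint is shown below; the endpoint name, model, and instance choices are illustrative assumptions, not recommendations from the post.

```python
from huggingface_hub import create_inference_endpoint

# Deploy an open model to managed AWS GPU infrastructure (names and sizes are illustrative).
endpoint = create_inference_endpoint(
    "demo-text-gen",
    repository="mistralai/Mistral-7B-Instruct-v0.2",
    framework="pytorch",
    task="text-generation",
    vendor="aws",
    region="us-east-1",
    accelerator="gpu",
    instance_size="x1",
    instance_type="nvidia-a10g",
    type="protected",
)
endpoint.wait()  # block until the endpoint is running

# Call the deployed model through the endpoint's inference client.
print(endpoint.client.text_generation(
    "Explain retrieval-augmented generation in one sentence.", max_new_tokens=60))
```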
New Generative AI Insights for AWS Partners to Accelerate Your Customer Offerings
AWS embraces the “working backwards” approach to stay customer-focused. The Generative AI Center of Excellence (CoE) for AWS Partners applies this methodology and collects partner feedback to provide relevant insights, tools, and resources on leveraging generative AI. Recent updates to the CoE include customer research on generative AI adoption challenges, a usage maturity heatmap by industry, and five new use case deep dives covering telecom, automotive, intelligent document processing (IDP), contact centers, and financial analysts.