AWS Partner Network (APN) Blog
Category: Amazon SageMaker
How Datasaur Reimagines Data Labeling Tasks Using Generative AI on AWS
Generative AI adoption is growing rapidly, and modern machine learning models need massive amounts of labeled data. Manually labeling that data is time-consuming, so AWS has collaborated with Datasaur to offer solutions that address data labeling challenges using generative AI. Datasaur’s NLP Platform automates annotation tasks and integrates with AWS services, and its LLM Labs evaluates large language models’ performance and cost for labeling tasks.
Accelerating the Modern Manufacturing Transformation with LTIMindtree’s Digital Command Center on AWS
In today’s industrial landscape, smart manufacturing is pivotal for sustainability and efficiency gains. However, organizations face challenges in gathering and consolidating data from multiple, often outdated sources on the manufacturing floor. LTIMindtree’s AWS-powered Digital Command Center (DCC) solution addresses this by enabling data acquisition, management, real-time KPI monitoring, and analytics-driven insights.
Optimize Customer Journey with a Bird’s Eye View of Customer Interactions from Joulica
Contact centers often face challenges due to a lack of visibility into customers’ omnichannel experiences. Joulica’s Customer Journey Analytics solution, part of AWS Contact Center Intelligence, provides a unified, real-time view of each customer’s journey across voice, digital, and social interactions. Built on AWS data streaming architecture, it empowers agents with a holistic understanding of the customer and enhances customer satisfaction and brand perception through optimized experiences.
Best Practices from Quantiphi for Unleashing Generative AI Functionality by Fine-Tuning LLMs
Fine-tuning large language models (LLMs) is crucial for leveraging their full potential across industries. Quantiphi unveils how fine-tuning supercharges LLMs to deliver domain-specific AI solutions that redefine possibilities. From personalized healthcare to precise financial predictions and streamlined legal reviews, fine-tuned models offer transformative value and unleash the power of customized, efficient, and responsible generative AI deployments.
How to Use Amazon SageMaker Pipelines MLOps with Gretel Synthetic Data
Generating high-quality synthetic data protects privacy and augments scarce real-world data for training machine learning models. This post shows how to integrate the Gretel synthetic data platform with Amazon SageMaker Pipelines for a full ML workflow. Gretel’s integration with SageMaker Pipelines in a hybrid or fully managed cloud environment enables responsible and robust adoption of AI while optimizing model accuracy. With Gretel, data scientists can overcome data scarcity without compromising individuals’ privacy.
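As a rough illustration of how a synthetic data step might plug into such a workflow, the sketch below defines a SageMaker Pipeline whose first step runs a processing script that would call the Gretel SDK. The script name, IAM role, and instance type are placeholders rather than details from the post.

```python
# Minimal sketch: a SageMaker Pipeline whose first step runs a script that
# calls the Gretel SDK to synthesize training data. Names below are
# illustrative placeholders, not values from the original post.
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingOutput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

# synthesize.py (hypothetical) would use the gretel-client package to train a
# generative model and write synthetic records to /opt/ml/processing/output.
synth_step = ProcessingStep(
    name="GenerateSyntheticData",
    processor=processor,
    code="synthesize.py",
    outputs=[ProcessingOutput(output_name="synthetic",
                              source="/opt/ml/processing/output")],
)

pipeline = Pipeline(name="gretel-synthetic-data-pipeline", steps=[synth_step])
pipeline.upsert(role_arn=role)   # create or update the pipeline definition
execution = pipeline.start()     # kick off a run
```

Downstream training and evaluation steps would then consume the synthetic output alongside any available real data.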
Automate Labeling for Intelligent Document Processing with Cognizant and Amazon SageMaker Ground Truth
Intelligent document processing (IDP) automates data extraction from diverse document formats, accelerating information retrieval. Manual labeling is expensive and difficult; Cognizant’s IDP solution on AWS overcomes this challenge by automating document labeling at scale. Its customized user interface in Amazon SageMaker Ground Truth lets subject matter experts label documents efficiently.
How Shellkode Uses Amazon Bedrock to Convert Natural Language Queries to NoSQL Statements
Large language models available through Amazon Bedrock can generate MongoDB queries from natural language questions, transforming how users access NoSQL databases. By leveraging AI and language models, this solution allows business users to query MongoDB data in conversational English instead of code. It connects to MongoDB with PyMongo, generates queries with LangChain and Bedrock, and retrieves and formats the results into natural language answers.
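To make the flow concrete, here is a minimal sketch of the same pattern. It calls Bedrock through boto3’s Converse API rather than LangChain, and the connection string, collection schema, field names, and model ID are illustrative assumptions, not details from Shellkode’s solution.

```python
# Minimal sketch of the natural-language-to-MongoDB flow described above.
# All names (database, collection, fields, model ID) are placeholders.
import json
import boto3
from pymongo import MongoClient

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
collection = MongoClient("mongodb://localhost:27017")["sales"]["orders"]

question = "How many orders over $500 were placed last month?"
prompt = (
    "Return only a JSON MongoDB filter document (no prose) that answers: "
    f"{question}\nCollection fields: order_total, order_date, customer_id."
)

# Ask the model to translate the question into a MongoDB filter.
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
filter_doc = json.loads(response["output"]["message"]["content"][0]["text"])

# Run the generated query with PyMongo, then summarize the results in English.
results = list(collection.find(filter_doc).limit(20))
summary_prompt = f"Answer the question '{question}' using these records: {results}"
answer = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": summary_prompt}]}],
)
print(answer["output"]["message"]["content"][0]["text"])
```

A production version would validate the generated filter before executing it; the sketch trusts the model’s JSON output for brevity.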
How Startups Can Fast-Track Their AWS Machine Learning Journey with Automat-IT’s MLOps Accelerator
Many startups want to use machine learning but struggle with developing scalable MLOps pipelines. Automat-IT’s MLOps Accelerator helps startups fast-track their machine learning journey and provides an end-to-end automated solution for the ML lifecycle, from data preparation to deployment, leveraging AWS services. With customizable pipelines and dedicated ML experts, Automat-IT empowers various roles to develop, operationalize, and monitor models efficiently.
Reducing Inference Times by 87% for Darwinbox’s Talent Search Engine Using AWS Inferentia
Darwinbox wanted to reduce the time its PyTorch models took to match resumes against job descriptions. AWS Premier Partner Minfy helped them leverage Amazon SageMaker and AWS Inferentia, achieving 87% faster inference without retraining. Key steps included compiling the models with the Neuron SDK, extending SageMaker containers, using Inference Recommender to optimize configurations, and sending requests in mini-batches.
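For readers curious what the compilation step looks like, below is a minimal, hypothetical sketch using the torch_neuronx API (the Inferentia2 flavor of the Neuron SDK); the model, batch size, and sequence length are assumptions and not taken from the Darwinbox and Minfy work.

```python
# Minimal sketch of compiling a PyTorch transformer for Inferentia with
# torch_neuronx. Model choice, batch size, and sequence length are
# illustrative assumptions, not values from the original post.
import torch
import torch_neuronx
from transformers import AutoModel, AutoTokenizer

model_id = "sentence-transformers/all-MiniLM-L6-v2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, torchscript=True).eval()

# Example input shaped like a small mini-batch of resume texts.
example = tokenizer(
    ["sample resume text"] * 4,
    padding="max_length", max_length=128, truncation=True, return_tensors="pt",
)
example_inputs = (example["input_ids"], example["attention_mask"])

# Compile for Inferentia; the traced artifact is then packaged into an
# extended SageMaker inference container for deployment.
neuron_model = torch_neuronx.trace(model, example_inputs)
torch.jit.save(neuron_model, "model_neuron.pt")
```

At serving time, requests are grouped into mini-batches that match the compiled input shape, which is part of how the post’s latency gains were achieved.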
The Future of Search: Exploring Generative AI Chat-Based Solutions with AWS and Slalom
In a recent webinar, Slalom and AWS showcased the incredible potential of chat-based enterprise search powered by AWS generative AI services like Amazon Bedrock. We’re excited to share key takeaways and a more in-depth exploration of the transformative landscape that chat-based search creates. Learn how technologies like Amazon Bedrock empower businesses to build intelligent chat-based interfaces that allow employees to interact with company data conversationally.