
Workday Accelerates Generative AI & ML Product Development Using Amazon SageMaker

Learn how Workday fuels engineering productivity by using Amazon SageMaker.

Overview

Workday Inc. (Workday), a leading provider of solutions that help organizations manage their people and money, focuses its engineering effort on developing products with built-in artificial intelligence (AI) capabilities. To free its engineers from infrastructure maintenance, Workday adopted Amazon SageMaker, a fully managed service that helps its teams build, train, and deploy machine learning (ML) models for any use case. By using AWS services, Workday’s engineering teams can rapidly iterate and deploy complex models, including large language models (LLMs), to production.


About Workday

More than 10,000 organizations worldwide rely on Workday to manage their most valuable assets—people and money. Workday provides customers with efficient financial and human resources solutions that help facilitate decision-making and performance.

Opportunity | Using AWS Regions to Meet Data Residency Requirements for Workday’s Global Customers

Workday offers software solutions that help its customers make accurate decisions and drive performance across human resources planning, financial planning, supply chain management, and other areas of their operations. For years, Workday has been investing in AI to help its customers make the most of their operational data with AI/ML-driven insights. “We consider ML a core backend technology for Workday,” says Shane Luke, head of Workday AI. “Our goal is to make AI-based solutions that provide our customers with real value.”

Because the company serves a global customer base, Workday needs to run its ML inference in alignment with its customers’ data residency requirements. “We have customers who are very sensitive,” says Luke. “We came to the realization that we needed a federated, distributed system that could run in many regions.” While building out a backend for its ML, the company wanted to avoid investing in its own regional private clouds.

Workday’s teams found that they can run their workloads in the AWS Region of their choice, which has supported the company’s business growth. “Our global expansion has been done on AWS,” says Luke. “It really has been a key point for us. We can deliver regionality to customers based in Europe, the Middle East, and Asia. For us, that’s been a major win.”

“Using AWS, we’ve gone from scaling to a thousand inference requests to tens of millions that are coming in daily,” says Luke. “It’s been very rewarding to see.” Further, the company has been able to scale with virtually no downtime.

Solution | Improving Inference Latency Fivefold Using Amazon SageMaker

For its generative AI use cases, Workday uses Amazon SageMaker to simplify searching, evaluating, customizing, and deploying LLMs. “Workday has been an early adopter of LLMs, and we are actively building new generative AI capabilities that will help our customers increase productivity, grow, retain talent, streamline business processes, and drive better decision-making,” says Eddie Raffaele, vice president of Workday AI. “Workday can quickly tap into the power of generative AI and realize its value by bringing the best solutions to customers safely and responsibly.”

To support collaboration across its global teams, Workday provides its engineers access to Amazon SageMaker Studio, a web-based, integrated development environment for ML. Workday’s engineers can then compare and evaluate new foundation models by using Amazon SageMaker JumpStart, an ML hub with foundation models, built-in algorithms, and prebuilt ML solutions. “For tasks such as creating job descriptions, which must be high quality, we use the model evaluation capability in Amazon SageMaker and select the best foundation model that reflects our company’s priorities and metrics in a responsible way,” says Luke.
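To illustrate the kind of workflow this enables, the sketch below deploys and queries a foundation model from the SageMaker JumpStart hub using the SageMaker Python SDK. The model ID, instance type, and prompt are illustrative assumptions for the sake of the example, not details of Workday’s evaluation setup.

```python
# Minimal sketch: deploy a JumpStart foundation model and send an evaluation prompt.
# Assumes the SageMaker Python SDK and an execution role (e.g., running in SageMaker Studio).
# Model ID, instance type, and prompt are illustrative, not Workday's actual choices.
from sagemaker.jumpstart.model import JumpStartModel

# Pick a candidate foundation model from the JumpStart hub (hypothetical choice).
model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")

# Deploy it to a real-time endpoint for side-by-side evaluation.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

# Send an evaluation prompt, e.g., drafting a job description.
response = predictor.predict({
    "inputs": "Write a short job description for a senior payroll analyst.",
    "parameters": {"max_new_tokens": 256, "temperature": 0.7},
})
print(response)

# Clean up once the comparison is done.
predictor.delete_endpoint()
```

Repeating this step with different model IDs gives a quick, like-for-like way to compare candidate foundation models on the same prompts before committing to one.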

Workday’s engineering team has also adopted Amazon SageMaker Ground Truth Plus, which applies human feedback across the ML lifecycle to create and evaluate high-quality models. The team has used this solution across eight labeling use cases, including named entity recognition, entity linking, and sentiment and theme analysis. “There’s a lot of labeling and annotating that is needed to manage our LLM outputs and receive high-quality data within our guaranteed SLAs,” says Luke. “Amazon SageMaker Ground Truth Plus has become an intrinsic part of our LLMs.”

Next, Workday’s engineers fine-tune their LLMs with that high-quality data, using Amazon SageMaker Notebook Instances to prepare and process the training data. They then deploy their models for inference to achieve optimal performance and costs while reducing operational burden. For example, Workday used Amazon SageMaker to pilot a closed-book ML application that could analyze job descriptions, invoices, and contracts. During this pilot, Workday saw its ML inference latency improve by a factor of five.
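As a rough illustration of serving a deployed model, the sketch below calls a SageMaker real-time endpoint through boto3. The endpoint name, region, and payload shape are hypothetical and stand in for whatever a fine-tuned model expects; they are not details of Workday’s deployment.

```python
# Minimal sketch: invoke a deployed SageMaker real-time endpoint with boto3.
# Endpoint name, region, and payload are illustrative assumptions.
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-west-2")

payload = {
    "inputs": "Extract the invoice number and total amount from the following text: ...",
    "parameters": {"max_new_tokens": 128},
}

response = runtime.invoke_endpoint(
    EndpointName="document-analysis-llm",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

result = json.loads(response["Body"].read())
print(result)
```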

Workday also uses LLMs to power friendly, personalized reminders that help its customers stay on track with their project and organizational goals. “There are more than 13,000 tasks available through Workday,” says Luke. “We’ve built and trained an ML model for a tenant that delivers the three top task recommendations based on the user’s activity.” With these tools at their fingertips, Workday’s customers can maximize their operational efficiency and prioritize projects with data-driven insights.

Outcome | Experimenting with Generative AI Using Amazon Bedrock

Workday received early access to Amazon Bedrock, a fully managed service for building and scaling generative AI applications with foundation models. Workday uses Amazon Bedrock to facilitate product prototyping and test multibillion-parameter ML models. “We’re able to rapidly experiment and identify which AI capabilities we should invest in and put in front of our customers,” says Luke.
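For context, a common way to prototype against Amazon Bedrock is to invoke a foundation model through the boto3 runtime client, as in the sketch below. The model ID, region, and request body (which follows the Anthropic Messages format on Bedrock) are illustrative assumptions, not Workday’s configuration.

```python
# Minimal sketch: invoke a foundation model on Amazon Bedrock via boto3.
# Model ID, region, prompt, and parameters are illustrative assumptions.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize this contract clause in plain language: ..."}
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example foundation model
    contentType="application/json",
    accept="application/json",
    body=json.dumps(body),
)

print(json.loads(response["body"].read()))
```

Because the same client call works across the models Bedrock offers, swapping the model ID is enough to compare candidates during prototyping.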

The Workday team is also working toward immediate deployment of new features for its customers instead of rolling out features one region at a time. “We’re pleased with the flexibility that AWS has given us,” says Luke. “We can deliver value to our customers and scale horizontally.”

Using AWS, we’ve gone from scaling to a thousand inference requests to tens of millions that are coming in daily. It’s been very rewarding to see.

Shane Luke

Head of Workday AI
