AWS Machine Learning Blog

Slack delivers native and secure generative AI powered by Amazon SageMaker JumpStart

This post is co-authored by Jackie Rocca, VP of Product, AI at Slack

Slack is where work happens. It’s the AI-powered platform for work that connects people, conversations, apps, and systems together in one place. With the newly launched Slack AI—a trusted, native, generative artificial intelligence (AI) experience available directly in Slack—users can surface and prioritize information so they can find their focus and do their most productive work.

We are excited to announce that Slack, a Salesforce company, has collaborated with AWS, using Amazon SageMaker JumpStart to power Slack AI’s initial search and summarization features and to provide safeguards so that Slack can use large language models (LLMs) more securely. Slack worked with SageMaker JumpStart to host industry-leading third-party LLMs so that data is never shared with infrastructure owned by third-party model providers.

This keeps customer data in Slack at all times and upholds the same security practices and compliance standards that customers expect from Slack itself. Slack also uses Amazon SageMaker inference capabilities for advanced routing strategies, scaling the solution to customers with optimal performance, latency, and throughput.

“With Amazon SageMaker JumpStart, Slack can access state-of-the-art foundation models to power Slack AI, while prioritizing security and privacy. Slack customers can now search smarter, summarize conversations instantly, and be at their most productive.”

– Jackie Rocca, VP Product, AI at Slack

Foundation models in SageMaker JumpStart

SageMaker JumpStart is a machine learning (ML) hub that can help accelerate your ML journey. With SageMaker JumpStart, you can evaluate, compare, and select foundation models (FMs) quickly based on predefined quality and responsibility metrics to perform tasks like article summarization and image generation. Pretrained models are fully customizable for your use case with your data, and you can effortlessly deploy them into production with the user interface or SDK. In addition, you can access prebuilt solutions to solve common use cases and share ML artifacts, including ML models and notebooks, within your organization to accelerate ML model building and deployment. None of your data is used to train the underlying models. All the data is encrypted and is never shared with third-party vendors so you can trust that your data remains private and confidential.

Check out the SageMaker JumpStart model page for available models.

Slack AI

Slack launched Slack AI to provide native generative AI capabilities so that customers can quickly find and consume large volumes of information, enabling them to get even more value out of their shared knowledge in Slack. For example, users can ask a question in plain language and instantly get clear and concise answers with enhanced search. They can catch up on channels and threads in one click with conversation summaries. And they can access personalized, daily digests of what’s happening in select channels with the newly launched recaps.

Because trust is Slack’s most important value, Slack AI runs on enterprise-grade infrastructure that Slack built on AWS, upholding the same security practices and compliance standards that customers expect. Slack AI is built for security-conscious customers and is secure by design—customer data remains in-house, data is not used for LLM training purposes, and data remains siloed.

Solution overview

SageMaker JumpStart provides access to many LLMs, and Slack selects the FMs that best fit its use cases. Because these models are hosted on Slack’s owned AWS infrastructure, data sent to models during invocation doesn’t leave Slack’s AWS infrastructure. In addition, to provide a secure solution, data sent for invoking SageMaker models is encrypted in transit. The data sent to SageMaker JumpStart endpoints for invoking models is not used to train base models. SageMaker JumpStart allows Slack to support high standards for security and data privacy, while also using state-of-the-art models that help Slack AI perform optimally for Slack customers.

SageMaker JumpStart endpoints serving Slack business applications are powered by AWS instances. SageMaker supports a wide range of instance types for model deployment, which allows Slack to pick the instance that is best suited to support the latency and scalability requirements of Slack AI use cases. Slack AI uses multi-GPU instances to host its SageMaker JumpStart models. Multi-GPU instances allow each instance backing Slack AI’s endpoints to host multiple copies of a model, which improves resource utilization and reduces model deployment cost. For more information, refer to Amazon SageMaker adds new inference capabilities to help reduce foundation model deployment costs and latency.
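The cost effect of packing multiple model copies onto one multi-GPU instance can be made concrete with some back-of-the-envelope arithmetic. The GPU counts and hourly price below are purely illustrative assumptions, not Slack’s actual configuration:

```python
def copies_per_instance(gpus_per_instance: int, gpus_per_copy: int) -> int:
    """How many model copies fit on one multi-GPU instance."""
    return gpus_per_instance // gpus_per_copy

def hourly_cost_per_copy(instance_hourly_cost: float, copies: int) -> float:
    """Effective hourly cost of serving one model copy."""
    return instance_hourly_cost / copies

# Illustrative numbers only: an 8-GPU instance at $40/hour,
# hosting model copies that each need 2 GPUs
copies = copies_per_instance(8, 2)
print(copies)                              # 4 copies per instance
print(hourly_cost_per_copy(40.0, copies))  # 10.0 dollars/hour per copy
```

Hosting four copies instead of one on the same instance quarters the per-copy cost, which is the utilization benefit the paragraph above describes.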

The following diagram illustrates the solution architecture.

To use the instances most effectively and support the concurrency and latency requirements, Slack used SageMaker-offered routing strategies with their SageMaker endpoints. By default, a SageMaker endpoint distributes incoming requests uniformly across ML instances using a routing strategy called RANDOM. However, with generative AI workloads, requests and responses can be extremely variable, and it’s desirable to load balance by considering the capacity and utilization of each instance rather than routing randomly. To effectively distribute requests across the instances backing its endpoints, Slack uses the LEAST_OUTSTANDING_REQUESTS (LAR) routing strategy. This strategy routes requests to the specific instances that have more capacity to process requests instead of randomly picking any available instance. The LAR strategy provides more uniform load balancing and resource utilization. As a result, Slack AI saw over a 39% decrease in p95 latency after enabling LEAST_OUTSTANDING_REQUESTS compared to RANDOM.
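As a rough illustration of why this helps, the sketch below contrasts the two strategies on a toy fleet. This is not SageMaker’s internal implementation; the instance names and load numbers are invented:

```python
import random

class Instance:
    """Toy model of an ML instance behind an inference endpoint."""
    def __init__(self, name: str, outstanding: int = 0):
        self.name = name
        self.outstanding = outstanding  # in-flight (unfinished) requests

def route_random(instances, rng):
    # RANDOM: pick any instance uniformly, ignoring current load
    return rng.choice(instances)

def route_least_outstanding(instances):
    # LEAST_OUTSTANDING_REQUESTS: pick the instance with the fewest
    # in-flight requests, i.e. the one with the most spare capacity
    return min(instances, key=lambda inst: inst.outstanding)

# One instance is bogged down by long generations; the others are mostly idle
fleet = [Instance("inst-a", outstanding=7),
         Instance("inst-b", outstanding=1),
         Instance("inst-c", outstanding=3)]

# RANDOM may well send the next request to the overloaded instance ...
rng = random.Random(0)
print(route_random(fleet, rng).name)

# ... while least-outstanding routing always avoids it
print(route_least_outstanding(fleet).name)  # inst-b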

For more details on SageMaker routing strategies, see Minimize real-time inference latency by using Amazon SageMaker routing strategies.

Conclusion

Slack is delivering native generative AI capabilities that will help their customers be more productive and easily tap into the collective knowledge that’s embedded in their Slack conversations. With fast access to a large selection of FMs and advanced load balancing capabilities, hosted on dedicated instances through SageMaker JumpStart, Slack AI is able to deliver rich generative AI features more quickly and robustly, while upholding Slack’s trust and security standards.

Learn more about SageMaker JumpStart, Slack AI, and how the Slack team built Slack AI to be secure and private. Leave your thoughts and questions in the comments section.


About the Authors

Jackie Rocca is VP of Product at Slack, where she oversees the vision and execution of Slack AI, which brings generative AI natively and securely into Slack’s user experience. Now she’s on a mission to help customers accelerate their productivity and get even more value out of their conversations, data, and collective knowledge with generative AI. Prior to her time at Slack, Jackie was a Product Manager at Google for more than six years, where she helped launch and grow YouTube TV. Jackie is based in the San Francisco Bay Area.

Rachna Chadha is a Principal Solutions Architect AI/ML in Strategic Accounts at AWS. Rachna is an optimist who believes that the ethical and responsible use of AI can improve society in the future and bring economic and social prosperity. In her spare time, Rachna likes spending time with her family, hiking, and listening to music.

Marc Karp is an ML Architect with the Amazon SageMaker Service team. He focuses on helping customers design, deploy, and manage ML workloads at scale. In his spare time, he enjoys traveling and exploring new places.

Maninder (Mani) Kaur is the AI/ML Specialist lead for Strategic ISVs at AWS. With her customer-first approach, Mani helps strategic customers shape their AI/ML strategy, fuel innovation, and accelerate their AI/ML journey. Mani is a firm believer of ethical and responsible AI, and strives to ensure that her customers’ AI solutions align with these principles.

Gene Ting is a Principal Solutions Architect at AWS. He is focused on helping enterprise customers build and operate workloads securely on AWS. In his free time, Gene enjoys teaching kids technology and sports, as well as following the latest on cybersecurity.

Alan Tan is a Senior Product Manager with SageMaker, leading efforts on large model inference. He’s passionate about applying machine learning to the area of analytics. Outside of work, he enjoys the outdoors.