AWS for Industries

Tapestry Makes Enterprise Knowledge More Accessible Using Generative AI on AWS

Generative artificial intelligence (AI) has the potential to transform how large companies do business. In many enterprises, corporate information is stored in silos managed by different teams. By combining large language models (LLMs) with robust knowledge bases, these organizations can create intelligent systems that not only store information but also understand it—making company knowledge more accessible and actionable than ever before.

Tapestry, the technology-driven parent company of iconic brands such as Coach and Kate Spade, is at the forefront of using generative AI for enterprise transformation. Using Amazon Web Services (AWS), the company has developed an intelligent knowledge management system that centralizes access to data across business units, empowering employees to make faster, better decisions.

Transforming Enterprise Knowledge Management Using AI on AWS

Tapestry has pioneered a culture of technological innovation within the luxury retail sector, demonstrated by its willingness to experiment with and implement new solutions. This commitment is evident in its approach to AI: methodical, security-focused, and designed to create business value.

“We wanted to build an internal generative AI solution before we explored public applications,” says Aravind Narasimhan, vice president of application technologies at Tapestry. “This would give us the opportunity to try out the technology, learn new things, and educate our corporate partners.”

Like many large enterprises, Tapestry faced challenges with knowledge management. Information was siloed within IT, human resources, legal, and other teams, making it difficult for employees to quickly find the answers they needed. Standard operating procedures, policies, and institutional knowledge were also scattered across different systems and formats. Tapestry saw an opportunity to address this challenge with generative AI. Amazon Bedrock, which offers a choice of high-performing foundation models, and Amazon Bedrock Knowledge Bases, which can give foundation models and agents contextual information from a company’s private data sources, provided the ideal foundation for building an enterprise-scale solution.

“Amazon Bedrock Knowledge Bases is more of a plug-and-play solution with minimal coding effort,” says Karthigeyan Ramakrishnan, director of applications at Tapestry. “You have a choice of embedding models, various vector stores, and LLMs. You can configure the chunking, the tokens, and everything else easily. That’s why we love it.”

Creating a Company-Wide Foundation for Seamless Information Sharing

The development process spanned 4 months, beginning with a proof of concept in the IT department. The knowledge management system uses multiple AWS services. At its core, Amazon Bedrock Knowledge Bases is used to manage and organize document repositories, and Claude 3 Haiku serves as the LLM for processing natural language queries. To generate text embeddings that capture semantic meaning, Tapestry uses Amazon Titan foundation models in Amazon Bedrock, which provide a breadth of high-performing image, multimodal, and text model choices. Additionally, Amazon Aurora PostgreSQL-Compatible Edition—which provides unparalleled high performance and availability at global scale—serves as the vector store and indexes the text embeddings to facilitate fast, efficient semantic search across millions of document chunks.
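
At the query layer, this pattern maps to a single Amazon Bedrock Knowledge Bases API call that retrieves relevant chunks and passes them to the model. The following is a minimal sketch of that call using the AWS SDK for Python (Boto3), not Tapestry's production code; the knowledge base ID, AWS Region, and example question are placeholders.

```python
import boto3

# Runtime client for querying Amazon Bedrock Knowledge Bases
bedrock_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Placeholder identifiers -- real values come from the knowledge base setup
KNOWLEDGE_BASE_ID = "EXAMPLEKBID"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"

def ask_knowledge_base(question: str) -> str:
    """Retrieve relevant document chunks and generate an answer with Claude 3 Haiku."""
    response = bedrock_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    return response["output"]["text"]

print(ask_knowledge_base("What is the standard operating procedure for onboarding a new vendor?"))
```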

The Tapestry team architected the knowledge management solution with distinct public and private knowledge stores, each configured independently to meet departmental needs. The public knowledge store is accessible to any corporate user and contains general operational information, while the private stores are restricted by role-based access control for sensitive departmental data. This separation is achieved in two ways: (1) the use of AWS Identity and Access Management (AWS IAM) roles, which securely manage identities and access to AWS services and resources, and (2) the incorporation of the solution into Tapestry’s single sign-on system.
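
In application code, that separation can be expressed as a simple mapping from a user's single sign-on group membership to the knowledge bases the user is allowed to query. The sketch below is illustrative only; the group names and knowledge base IDs are hypothetical, and in practice the AWS IAM role attached to each function enforces the same boundary server-side.

```python
# Hypothetical mapping from SSO group claims to Amazon Bedrock knowledge base IDs.
# The public knowledge base is open to any corporate user; each private store
# requires membership in the matching department group.
PUBLIC_KB_ID = "PUBLICKB01"
PRIVATE_KB_IDS = {
    "it-staff": "ITKB00001",
    "hr-staff": "HRKB00001",
    "legal-staff": "LEGALKB01",
}

def knowledge_bases_for_user(sso_groups: list[str]) -> list[str]:
    """Return the knowledge base IDs this user is permitted to query."""
    allowed = [PUBLIC_KB_ID]
    allowed += [kb_id for group, kb_id in PRIVATE_KB_IDS.items() if group in sso_groups]
    return allowed

# Example: an HR employee can reach the public store plus the HR store.
print(knowledge_bases_for_user(["hr-staff", "all-employees"]))
```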

The solution’s serverless architecture creates a seamless experience for users while maintaining global scalability. Employees can access the knowledge management system from almost anywhere in the world through a web-based chatbot interface that is powered by Amazon CloudFront, a service that securely delivers content. When a user submits a query, AWS Lambda—which runs code in response to events and automatically manages the compute resources—runs functions that process the request, handle the necessary security checks, and route the query to the appropriate knowledge base.

Figure: Serverless application with Amazon Bedrock
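
A trimmed-down Lambda handler for this flow might look like the sketch below. It is illustrative rather than Tapestry's implementation: the event shape assumes a JSON body from an API front end, the knowledge base ID and model ARN are placeholders, and the identity checks performed by the single sign-on integration are reduced to a comment.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-agent-runtime")

# Placeholder values -- in a real deployment these would come from environment variables.
KNOWLEDGE_BASE_ID = "PUBLICKB01"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"

def lambda_handler(event, context):
    """Validate the chatbot request, query the knowledge base, and return the answer."""
    body = json.loads(event.get("body") or "{}")
    question = (body.get("question") or "").strip()
    if not question:
        return {"statusCode": 400, "body": json.dumps({"error": "Missing 'question'"})}

    # In the real system, the caller's SSO identity would be verified here and used
    # to select the public or department-specific knowledge base.
    response = bedrock_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    return {"statusCode": 200, "body": json.dumps({"answer": response["output"]["text"]})}
```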

AWS Lambda functions also manage the interactions between Amazon Bedrock for natural language processing and the Aurora PostgreSQL vector store for document retrieval. All source documents and metadata are stored in Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve virtually any amount of data from anywhere.
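
For retrieval on its own, for example to show which source documents support an answer, the same runtime client exposes a Retrieve operation whose results include the Amazon S3 location of each matched chunk's source document. The snippet below is a sketch with a placeholder knowledge base ID.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
KNOWLEDGE_BASE_ID = "EXAMPLEKBID"  # placeholder

def find_source_documents(question: str, top_k: int = 5) -> list[dict]:
    """Return the most relevant chunks with their scores and S3 source locations."""
    response = bedrock_runtime.retrieve(
        knowledgeBaseId=KNOWLEDGE_BASE_ID,
        retrievalQuery={"text": question},
        retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": top_k}},
    )
    return [
        {
            "snippet": result["content"]["text"][:200],
            "source": result.get("location", {}).get("s3Location", {}).get("uri"),
            "score": result.get("score"),
        }
        for result in response["retrievalResults"]
    ]

for hit in find_source_documents("expense reporting policy"):
    print(hit["score"], hit["source"])
```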

A key feature of the knowledge management system is its ability to maintain fresh, updated content. The Tapestry team implemented automated processes to regularly scan and update the knowledge bases so that any changes in source documents are reflected in the system. The team developed an automated pipeline that monitors document repositories for changes, processes new or modified documents using custom chunking strategies that are optimized for different content types, and generates new embeddings using Amazon Titan foundation models. Those updated embeddings are then automatically indexed in the Aurora PostgreSQL vector store so that the knowledge base remains current.
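
With Amazon Bedrock Knowledge Bases, re-indexing a data source after its documents change is exposed as an ingestion job, which is one way a refresh step like this can be triggered and monitored. The sketch below uses placeholder knowledge base and data source IDs and omits the change detection, scheduling, and custom chunking logic that surround it in practice.

```python
import time
import boto3

# Control-plane client for managing knowledge bases and their ingestion jobs
bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

KNOWLEDGE_BASE_ID = "EXAMPLEKBID"  # placeholder
DATA_SOURCE_ID = "EXAMPLEDSID"     # placeholder

def sync_knowledge_base() -> str:
    """Start an ingestion job and poll until it finishes, returning its final status."""
    job = bedrock_agent.start_ingestion_job(
        knowledgeBaseId=KNOWLEDGE_BASE_ID,
        dataSourceId=DATA_SOURCE_ID,
    )["ingestionJob"]

    while job["status"] in ("STARTING", "IN_PROGRESS"):
        time.sleep(30)
        job = bedrock_agent.get_ingestion_job(
            knowledgeBaseId=KNOWLEDGE_BASE_ID,
            dataSourceId=DATA_SOURCE_ID,
            ingestionJobId=job["ingestionJobId"],
        )["ingestionJob"]
    return job["status"]

print(sync_knowledge_base())
```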

Driving Business Value Through Enhanced Knowledge Management

Tapestry is actively expanding the system’s capabilities to create an even more powerful solution. Its plans include implementing image search capabilities to extract and query information from visual content, in addition to incorporating the solution into other enterprise systems to produce near real-time data analytics and reporting.

“We’re working to make the system more intelligent and interactive with user responses,” says Ramakrishnan. “Right now, we have text search, and we’re looking at how we can introduce voice capabilities. We’re also working on incorporating structured data, where people could ask questions about sales and inventory, store traffic, and orders placed.”

Using generative AI on AWS, Tapestry is breaking down information silos, accelerating decision-making, and preserving valuable institutional knowledge that might otherwise be lost. The solution has significantly reduced the time employees spend searching for information and waiting for responses from subject matter experts. In turn, these capabilities have freed staff to focus on more strategic, innovative work.

“Our departments benefit from generative AI on AWS because they’re responding less to typical questions, and, from a user standpoint, they get what they want quicker,” says Narasimhan. “We become more agile and can move faster.”

Nishant Singh

Nishant is a Senior Customer Solutions Manager (CSM) at AWS, where he dedicates his expertise to the retail and consumer packaged goods (CPG) industries. His mission centers on empowering customers to architect and implement value-driven solutions that deliver measurable business outcomes through AWS technologies. With a customer-first mindset, he focuses on transforming business challenges into opportunities for growth, innovation, and enhanced operational efficiency using the comprehensive capabilities of the AWS platform.

Aditya Pendyala

Aditya is a Principal Solutions Architect at AWS based out of NYC. He has extensive experience in architecting cloud-based applications. He is currently working with large enterprises to help them craft highly scalable, flexible, and resilient cloud architectures, and guides them on all things cloud. He has a Master of Science degree in Computer Science from Shippensburg University and believes in the quote “When you cease to learn, you cease to grow.”

Aravind Narasimhan

Aravind Narasimhan is a seasoned senior leader who blends strategic IT vision, architecture, technology assessment, and feasibility planning in high-performance environments. Over the course of his career, he has led the technical delivery of omnichannel, cloud, and ERP transformations. He also led the development and establishment of the RPA practice. In addition, Aravind heads the AI Center of Excellence (COE) at Tapestry, driving innovation and excellence in artificial intelligence. Before joining Tapestry, he spearheaded multiple initiatives across planning, merchandising, POS, and BI solutions in the retail industry. Aravind's academic background includes an engineering degree from Bangalore University and an MBA in Finance from Rutgers University.

Karthigeyan Ramakrishnan

Karthigeyan Ramakrishnan is a seasoned IT professional with 20+ years of experience in leading, architecting, consulting on, and delivering novel IT solutions. Karthigeyan has worked in retail IT across functional areas ranging from warehouse management, planning, merchandising, and customer service to automation and generative AI. He has global consulting experience in markets spanning Europe, India, South America, and the U.S. At Tapestry, he leads the Planning and Automation space. He is passionate about generative AI and enjoys using it not just to solve problems but to identify new opportunities that have yet to be unearthed. He has a degree in Chemical Engineering from Anna University, Chennai, and loves applying reactor design principles to technology solutions.