AWS for Industries
Building In-Vehicle AI Assistants with Strands Agents
In today’s rapidly evolving automotive landscape, vehicle occupants expect to interact with their vehicle using natural spoken language. An AI-based multi-agent assistant architecture built on the Strands Agents SDK, a simple-to-use, code-first framework for building agents, can help handle the wide range of commands that occupant interactions produce.
Drivers demand intelligent, responsive, and personalized experiences that make every journey safer, more efficient, and more enjoyable. This blog describes a reference architecture and demonstration that address these customer needs using AWS multi-agent workflows, and shows how smart assistants can help evolve vehicles into sophisticated travel companions that understand and anticipate drivers’ needs.
In-vehicle voice commands have evolved from basic driver assistants to AI-driven systems that integrate more deeply with daily driving experiences. Today’s automotive systems combine sensor data, real-time traffic analysis, and driver behavior patterns to help create responsive and adaptive driving environments. These systems help drivers monitor vehicle conditions, from tire pressure to engine performance, while processing external factors such as traffic and weather. Natural language processing (NLP) helps facilitate communication between vehicle occupants and the vehicle itself.
By analyzing vehicle systems and telematics data, these in-vehicle assistants can help transform routine maintenance from a reactive task into a proactive process. For example, an assistant can help a driver understand what an engine warning light means, gauge the severity of the issue, and schedule a service appointment while considering both driver and service center availability.
The next section will describe the architecture that powers this rich set of features.
Multi-Agent Architecture Design using Strands Agents
Figure 1, shown below, illustrates a reference architecture that customers can use to help create agentic experiences. This architecture features a multi-agent workflow system, powered by Strands Agents, that functions as a well-coordinated AI-powered team.
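To make the workflow concrete, the following minimal Python sketch shows how a supervisor agent and one collaborator agent can be wired together with the Strands Agents SDK using the agents-as-tools pattern. The agent names, prompts, and model ID are illustrative assumptions, not the exact implementation behind Figure 1.

    # A minimal sketch of the supervisor/collaborator pattern with the Strands
    # Agents SDK. Agent names, prompts, and the model ID are illustrative
    # assumptions, not the exact implementation behind Figure 1.
    from strands import Agent, tool
    from strands.models import BedrockModel

    model = BedrockModel(model_id="us.amazon.nova-pro-v1:0")

    # A collaborator agent wrapped as a tool (the "agents as tools" pattern).
    symptom_agent = Agent(
        model=model,
        system_prompt="Analyze vehicle symptoms and assign a Low, Medium, or High severity.",
    )

    @tool
    def vehicle_symptom(query: str) -> str:
        """Analyze a reported vehicle symptom and rate its severity."""
        return str(symptom_agent(query))

    # The orchestrator (supervisor) delegates tasks to its collaborator tools.
    orchestrator = Agent(
        model=model,
        system_prompt=(
            "You are an in-vehicle assistant. Route diagnostic questions to the "
            "vehicle_symptom tool and answer the driver concisely."
        ),
        tools=[vehicle_symptom],
    )

    response = orchestrator("My check engine light just came on. What should I do?")

The remaining collaborators described below, such as dealership lookup, appointment availability, booking, and parts ordering, can be registered in the same tools list.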
Figure 1. In-Vehicle AI Assistant Reference Architecture
The following describes the numbered callouts in the architecture diagram shown in Figure 1.
1. While driving, a dashboard warning light appears on the instrument cluster, and the driver activates the AI assistant with a voice command. The system captures diagnostic data and converts the driver’s voice request to text. Both inputs flow through Amazon API Gateway, which authenticates the request and routes it to AWS Lambda, which in turn invokes the Strands Agents orchestrator agent for processing and response generation (a Lambda handler along these lines is sketched after this list).
2. The orchestrator (supervisor) agent receives the input, interprets it using its instructions, and delegates the required tasks to the group of Strands Agents collaborator agents.
3. Customers place vehicle manuals in Amazon S3, and their embeddings are stored in a vector database provided by Amazon OpenSearch Service. The Vehicle Symptom tool queries Amazon Bedrock Knowledge Bases to analyze the issue, recommend diagnostic steps, and assign a Low, Medium, or High severity rating (see the retrieval sketch after this list). This helps enable rapid problem identification and guidance tailored to the specific vehicle model and reported symptoms.
4. The group of collaborator tools, including a dealership tool, an appointment availability tool, a book appointment tool, and a parts order tool, receives tasks from the orchestrator agent; the tools work together, in parallel, to execute their various tasks.
4a. The collaborator tools use AWS Lambda functions to integrate with data sources and perform reads and writes in real time. These data sources include Amazon DynamoDB, which backs booking service appointments, providing dealership information and appointment availability, and locating replacement parts by diagnostic error code (see the DynamoDB tool sketch after this list).
4b. The collaborator agents invoke Amazon Nova large language models (LLMs) as needed. For instance, they might use an Amazon Nova LLM to help identify a specific service part in response to a customer inquiry.
5. The parts order tool acts as a mini orchestrator, coordinating multiple sub-functions to identify the automotive parts needed for a given diagnostic code (see the parts tool sketch after this list). It checks real-time inventory at dealerships and can automatically initiate orders for out-of-stock items to help minimize service delays. This proactive approach helps ensure parts availability when the vehicle arrives for service.
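For steps 1 and 2 above, the handler below is a minimal sketch of how AWS Lambda might receive the API Gateway request and hand both inputs to the orchestrator. The transcript and diagnostics field names are hypothetical; adapt them to your actual payload.

    import json

    from strands import Agent

    # The orchestrator from the earlier sketch; initialized once per Lambda
    # container so it is reused across invocations.
    orchestrator = Agent(system_prompt="You are an in-vehicle assistant.")

    def lambda_handler(event, context):
        # API Gateway proxy integration delivers the request body as a JSON string.
        body = json.loads(event["body"])
        utterance = body["transcript"]             # hypothetical field: transcribed voice request
        diagnostics = body.get("diagnostics", {})  # hypothetical field: warning-light and sensor data

        # Combine both inputs into a single prompt for the supervisor agent.
        prompt = f"Driver said: {utterance}\nVehicle diagnostics: {json.dumps(diagnostics)}"
        result = orchestrator(prompt)

        return {"statusCode": 200, "body": json.dumps({"reply": str(result)})}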
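For step 3, the Vehicle Symptom tool might query the knowledge base through the Amazon Bedrock Agents runtime retrieve API, as in this sketch; the knowledge base ID is a placeholder.

    import boto3

    from strands import tool

    kb_client = boto3.client("bedrock-agent-runtime")
    KNOWLEDGE_BASE_ID = "EXAMPLEKBID"  # placeholder: your Knowledge Base ID

    @tool
    def vehicle_symptom(symptom: str, vehicle_model: str) -> str:
        """Retrieve vehicle manual excerpts relevant to a reported symptom."""
        response = kb_client.retrieve(
            knowledgeBaseId=KNOWLEDGE_BASE_ID,
            retrievalQuery={"text": f"{vehicle_model}: {symptom}"},
            retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
        )
        excerpts = [r["content"]["text"] for r in response["retrievalResults"]]
        return "\n---\n".join(excerpts)

The agent’s model can then assign the Low, Medium, or High severity rating based on the retrieved excerpts.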
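For step 4a, a collaborator tool backed by Amazon DynamoDB could look like the following sketch; the table name and key schema are assumptions.

    import boto3
    from boto3.dynamodb.conditions import Key

    from strands import tool

    dynamodb = boto3.resource("dynamodb")
    appointments = dynamodb.Table("ServiceAppointments")  # hypothetical table name

    @tool
    def appointment_availability(dealership_id: str, date: str) -> list:
        """List open service slots for a dealership on a given date (YYYY-MM-DD)."""
        response = appointments.query(
            KeyConditionExpression=Key("dealership_id").eq(dealership_id)
            & Key("slot_datetime").begins_with(date)
        )
        return [item for item in response["Items"] if item.get("status") == "open"]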
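For step 5, the parts order tool can coordinate its sub-functions in plain Python, as in this sketch; the inventory table and the ordering backend are hypothetical.

    import boto3

    from strands import tool

    inventory = boto3.resource("dynamodb").Table("PartsInventory")  # hypothetical table name

    def initiate_order(part_number: str, dealership_id: str) -> str:
        """Placeholder for a call to a parts-ordering backend."""
        return f"Ordered part {part_number} for dealership {dealership_id}."

    @tool
    def parts_order(diagnostic_code: str, dealership_id: str) -> str:
        """Map a diagnostic code to a part, check stock, and order if out of stock."""
        item = inventory.get_item(
            Key={"dealership_id": dealership_id, "diagnostic_code": diagnostic_code}
        ).get("Item")
        if item is None:
            return f"No part mapping found for diagnostic code {diagnostic_code}."
        if int(item.get("quantity", 0)) > 0:
            return f"Part {item['part_number']} is in stock at {dealership_id}."
        # Out of stock: proactively order so the part arrives before the service visit.
        return initiate_order(item["part_number"], dealership_id)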
Conclusion
AI-driven vehicle intelligence presents a significant opportunity to help improve the driving experience. These systems are helping transform vehicles from mere modes of transportation into sophisticated, interactive companions that anticipate and respond to drivers’ needs. By using advanced multi-agent workflows, NLP, and voice assistants, automakers can enhance their overall customer and in-vehicle experience.
The future of automotive AI promises deeper integration of services, enhanced semantic routing, and improved on-device processing. These advancements will further blur the lines between vehicle functionality and intelligent assistance, helping create a more intuitive and responsive driving experience.
Call to action
AWS has released code samples in the AWS Samples GitHub repository. These samples demonstrate how to implement multi-agent collaboration using the Strands Agents SDK, AWS Lambda, and other AWS services. The provided examples illustrate various patterns designed to help create and manage collaborative AI agents using AWS services. These resources serve as practical guides for those looking to implement similar solutions in their own projects, and they offer insights into best practices and efficient integration of AWS services for AI-driven collaborative applications.