AWS Open Source Blog
Introducing Strands Labs: Get hands-on today with state-of-the-art, experimental approaches to agentic development
We’re introducing Strands Labs, a new Strands GitHub organization designed to give developers hands-on access to experimental, state-of-the-art approaches to agentic AI development. The Strands Agents SDK – available for both Python and TypeScript – has gained incredible traction in the developer community since we released it as open source in May 2025. The SDK has been downloaded more than 14 million times, and the AWS team has been hard at work adding new functionality, including experiments like Steering, to support a very active developer community. Strands’ model-driven approach has proven itself as simple, powerful, and scalable for everything from prototyping to enterprise production workloads. Learn more about Strands and the model-driven approach here.
We’ve chosen to make Strands Labs a separate GitHub organization to encourage innovation through experimentation, and to push the frontier of agentic development. We’ve also opened Strands Labs to all the development teams across Amazon – meaning they can all contribute their innovative open source projects for community use and feedback. This model will encourage faster experimentation, learning, and growth for Strands’ community of developers, without coupling experiments to the Strands SDK and its production release cycle. You can expect all projects in Strands Labs to ship with clear use cases, functional code, and tests to help you get started.
At launch, Strands Labs includes three projects: Robots, Robots Sim, and AI Functions.
- Robots: With Robots, we’re exploring how AI agents extend to the edge and the physical world, where they don’t just process information but interact with the physical environment around us. Through a unified Strands Agents interface, physical AI agents can control diverse robots by connecting AI capabilities directly to physical sensors and hardware.
- Robots Sim: Robots Sim integrates your agentic robots with simulated 3D physics-enabled worlds, enabling rapid prototyping and algorithm development in a safe, simulated environment without requiring physical robotic hardware. It’s perfect for iterating on agent strategies, testing Vision-Language-Action (VLA) model policies, and validating approaches before real-world deployment.
- AI Functions: AI Functions let developers define an agent using natural language specifications instead of code, writing pre and post conditions in Python that validate behavior and generate working implementations. This experiment is intended to narrow the trust gap when generating code with LLMs by focusing developer time on how to validate their intention, letting the framework do the rest.
Let’s dive into each of these below to showcase how these projects push the frontier of agentic development.
Strands Robots
Agentic AI systems are rapidly expanding beyond the digital world and into the physical domain, where AI agents perceive, reason, and act in real environments. As AI systems increasingly interact with the physical world through robotics, autonomous vehicles, and smart infrastructure, a fundamental question emerges: How do we build agents that leverage massive cloud compute for complex reasoning while maintaining millisecond-level responsiveness for physical sensing and actuation?
Strands Robots provides the orchestration, intelligence, and infrastructure layer, transforming individual edge devices into coordinated agentic physical AI systems. Through this project, our aim is to democratize physical AI through simple APIs, open source libraries, and managed services.
Strands Robots extends Strands Agents so that AI agents can control physical robots through a unified Strands Agents interface that connects them to physical sensors and hardware. It also enables rapid prototyping and algorithm development in a safe, simulated environment without requiring physical robotic hardware, which is perfect for iterating on agent strategies, testing VLA policies, and validating approaches before real-world deployment.
In this lab demonstration, a SO-101 robotic arm handles manipulation with the NVIDIA GR00T vision-language-action (VLA) model. The VLA model combines visual perception, language understanding, and action prediction in a single model: GR00T takes camera images, robot joint positions, and language instructions as input and directly outputs new target joint positions. In partnership with NVIDIA, we integrated NVIDIA GR00T with Strands Agents and demonstrated a Strands agent running on NVIDIA Jetson edge hardware to control the SO-101 robotic arm, showcasing how sophisticated AI capabilities can execute directly on embedded systems.
We additionally integrated with Hugging Face’s LeRobot, which provides data and hardware interfaces that make working with robotics hardware accessible. By combining hardware abstractions like LeRobot with VLA models (e.g., NVIDIA GR00T), we can create edge AI applications that perceive, reason, and act in the physical world.
As part of this initiative, and to make this easier for builders, we’ve released an experimental Robot class with a simple interface for connecting hardware to VLA models such as NVIDIA GR00T. For instance, to deploy an agent on an edge device that uses the NVIDIA GR00T VLA model with the SO-101 robotic arm for a task such as picking and placing an apple into a basket, the Strands Robot class can be employed as follows:
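The sketch below illustrates the shape of this pattern: one `Robot` object that binds hardware to a VLA policy behind a simple interface. The class, method, and parameter names here are hypothetical stand-ins so the example runs without hardware or the experimental package; consult the Strands Labs repository for the actual Robot API.

```python
# Illustrative sketch of the Strands Robots pattern: connect a VLA policy
# to robot hardware behind one simple interface. All names here are
# hypothetical stand-ins, not the real strands-labs API.
from dataclasses import dataclass, field

@dataclass
class VLAPolicy:
    """Stand-in for a VLA model endpoint (e.g., NVIDIA GR00T)."""
    name: str

    def predict(self, image, joint_positions, instruction):
        # A real VLA model maps (camera image, joint state, language
        # instruction) to new target joint positions. This stub just
        # nudges each joint so the control loop has visible effect.
        return [p + 0.1 for p in joint_positions]

@dataclass
class Robot:
    """Stand-in for the experimental Robot class: hardware + policy."""
    hardware: str
    policy: VLAPolicy
    joints: list = field(default_factory=lambda: [0.0] * 6)

    def act(self, instruction, steps=3):
        for _ in range(steps):
            image = None  # a real robot would read its camera here
            self.joints = self.policy.predict(image, self.joints, instruction)
        return self.joints

robot = Robot(hardware="so101", policy=VLAPolicy(name="groot-n1"))
final_joints = robot.act("pick the apple and place it in the basket")
print(final_joints)
```

The key design point survives the simplification: the agent issues one natural-language task, and the policy loop translates it into repeated low-level joint commands.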
The Robot class running on edge devices can delegate complex reasoning to the cloud using LLMs and other models when needed. VLA models provide millisecond-level control for physical actions, but when the system encounters situations requiring deeper reasoning – like planning multi-step tasks or making decisions based on historical patterns – it can consult more powerful cloud-based agents.
Strands Robots Sim
Strands Robots Sim provides an environment for rapid prototyping of agentic robotics without requiring physical robotics hardware. It supports:

- Libero benchmark environments
- Isaac-GR00T VLA policy support via ZMQ
- An extensible interface for VLA providers
- Capturing simulation episodes as MP4 videos
- Non-blocking simulation with status monitoring
- Fast testing without dependencies
- GR00T inference service management

The project currently supports two execution modes: full episode execution with final results, and iterative control with visual feedback per batch. Its modular design enables developers to swap policy implementations or simulation environments without restructuring core logic. The control loop executes steps sequentially, collecting observations from cameras and joint sensors and feeding this data to policy models that generate motor commands within fixed-size action horizons.
For instance, the following example illustrates how to use the SimEnv class from strands_robots_sim to control simulated robots within Libero environments using policies generated by NVIDIA GR00T. This example assumes that Libero is installed, the GR00T inference service is running on port 8000, and Docker with isaac-gr00t containers is accessible.
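As a self-contained approximation of that workflow, the sketch below mimics only the control-loop shape the project describes (reset, observe, query policy for a fixed-size action horizon, step). The real strands_robots_sim talks to Libero and a GR00T inference service over ZMQ; every name here is an illustrative stand-in so the example runs without those services.

```python
# Hypothetical sketch of the SimEnv control loop. The real package
# connects to Libero and a GR00T inference service; these stubs only
# reproduce the loop structure (observe -> policy -> action horizon -> step).

class StubPolicy:
    """Plays the role of the GR00T inference client: observations in,
    a fixed-size action horizon (batch of motor commands) out."""
    def get_action(self, observation, horizon=4):
        return [[0.0] * 7 for _ in range(horizon)]  # 7-DoF commands

class SimEnv:
    """Minimal simulated environment supporting full-episode execution."""
    def __init__(self, task, policy):
        self.task = task
        self.policy = policy
        self.steps_taken = 0

    def reset(self):
        self.steps_taken = 0
        return {"image": None, "joints": [0.0] * 7}

    def step(self, action):
        self.steps_taken += 1
        done = self.steps_taken >= 12  # episode ends after 12 steps
        return {"image": None, "joints": action}, done

    def run_episode(self):
        """Full-episode mode: loop until the episode reports done."""
        obs = self.reset()
        done = False
        while not done:
            for action in self.policy.get_action(obs):
                obs, done = self.step(action)
                if done:
                    break
        return self.steps_taken

env = SimEnv(task="libero_object_pick_apple", policy=StubPolicy())
total_steps = env.run_episode()
print(total_steps)  # 12
```

Swapping `StubPolicy` for a real inference client is the kind of substitution the project's modular design is meant to allow: the loop never changes, only the policy and environment implementations behind it.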
AI Functions
AI Functions introduces a new way to write code with agents: you write Python functions with natural language specifications instead of implementations. Using the @ai_function decorator, you define what you want a function to do through a description and validation conditions. AI Functions leverages the Strands agent loop to generate the implementation, validate the output, and automatically retry if validation fails. Consider loading invoice data from files in unknown formats. Traditional approaches require determining the file format, writing transformation logic for each format, constructing prompts, parsing responses, and orchestrating retries when validation fails. This typically involves dozens of lines of code and may not account for every scenario. With AI Functions, you write a small function describing the desired output, and a validator function expressing what success looks like. The LLM determines the file format, writes the transformation code, and returns a real Python DataFrame object.
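The generate-validate-retry loop at the heart of this idea can be sketched in plain Python. The decorator below is a simplified stand-in, not the actual @ai_function from Strands Labs: where the real framework prompts an LLM via the Strands agent loop, this version draws candidate implementations from a canned list (`fake_llm_implementations`, a stub) so the retry path runs without a model.

```python
# Simplified stand-in for the AI Functions pattern: treat the decorated
# function as a specification, generate an implementation, validate it,
# and retry on failure. The real @ai_function uses the Strands agent loop
# and an LLM; `fake_llm_implementations` is a stub in its place.
import functools

def ai_function(validator, max_retries=3):
    def decorator(spec_fn):
        @functools.wraps(spec_fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                # A real implementation would prompt an LLM with the
                # docstring of spec_fn; we index a canned list instead.
                result = fake_llm_implementations[attempt](*args, **kwargs)
                if validator(result):  # post-condition check
                    return result
            raise ValueError("no candidate implementation passed validation")
        return wrapper
    return decorator

# Candidate "generated" implementations. The first fails validation
# on purpose, so the automatic-retry path is exercised.
fake_llm_implementations = [
    lambda text: [],                                          # invalid
    lambda text: [line.split(",") for line in text.splitlines()],
    lambda text: [],
]

def has_rows(result):
    """Success condition: at least one parsed row with two fields."""
    return bool(result) and all(len(row) == 2 for row in result)

@ai_function(validator=has_rows)
def parse_invoice(text):
    """Parse invoice lines of the form 'item,amount' into rows."""

rows = parse_invoice("widget,9.99\nbolt,0.25")
print(rows)  # [['widget', '9.99'], ['bolt', '0.25']]
```

The developer's effort goes into `has_rows`, the statement of what success looks like; the framework owns generation, validation, and retries, which is exactly where the trust gap with LLM-generated code is meant to narrow.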
As we move forward, we expect to share more projects via Strands Labs with the Strands developer community, and we look forward to your feedback to continue to make Strands better.
Dive into these new approaches to agentic AI and start experimenting today in Strands Labs.