AWS Lambda Getting Started
Choose your own path
Page topics
- Path 1: Interactive Web and API-based Microservices
- Path 2: Batch Data Processing
- Path 3: Real-Time Data Processing
- Path 4: Generative AI
- Path 4-1: Agentic AI
- Path 4-2: Chatbot
- Path 4-3: Intelligent Document Processing (IDP)
- Path 4-4: Content Generation
- Path 5: No use case in mind? Start with AWS Lambda 101
Path 1: Interactive Web and API-based Microservices
Overview
Use AWS Lambda on its own or combined with other AWS services to build powerful web applications, microservices, and APIs that help you gain agility, reduce operational complexity, lower costs, and scale automatically.
Step 1: Get Started with Lambda HTTP
Learn how to build a dynamic web page from a single Lambda function. You will start by assigning an HTTPS endpoint to your Lambda function with a Lambda Function URL, which lets you call the function directly without having to learn, configure, and operate additional services. This is ideal for single-function microservices. Learn more
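As a concrete illustration, here is a minimal sketch of a Python handler that renders an HTML page behind a Lambda Function URL; the page content and timestamp logic are illustrative, not part of the tutorial.

```python
# Minimal sketch: a Lambda handler serving a dynamic HTML page through a
# Lambda Function URL. Uses only the standard library available in the
# Python Lambda runtime.
from datetime import datetime, timezone

def lambda_handler(event, context):
    # Function URLs deliver an HTTP-style event and expect an HTTP-style response.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
    html = (
        "<html><body><h1>Hello from Lambda</h1>"
        f"<p>Rendered at {now}</p></body></html>"
    )
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/html"},
        "body": html,
    }
```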
Step 2: Use Lambda with Amazon API Gateway
Next, you will use Amazon API Gateway to create a REST API that invokes a Lambda function, which in turn performs operations on an Amazon DynamoDB table. When you call the API through an HTTPS endpoint, API Gateway invokes the Lambda function. This is ideal for microservices with multiple functions, using Amazon API Gateway to map each function to API endpoints, methods, and resources. Learn more
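The sketch below shows the general shape of such a handler, assuming a Lambda proxy integration and a hypothetical DynamoDB table named ItemsTable; it illustrates the pattern rather than the tutorial's exact code.

```python
# Minimal sketch: a Lambda handler behind an API Gateway REST API (proxy
# integration) that writes an item to DynamoDB. "ItemsTable" is an assumed
# table name used only for illustration.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ItemsTable")  # hypothetical table name

def lambda_handler(event, context):
    # With proxy integration, API Gateway passes the request body as a string.
    body = json.loads(event.get("body") or "{}")
    item_id = body.get("id", "demo")
    table.put_item(Item={"id": item_id, "payload": body})
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"stored": item_id}),
    }
```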
Step 3: Build a Basic Web Application
Now you are ready to create a simple web application using AWS Lambda, Amazon API Gateway, Amazon DynamoDB, and AWS Amplify Console. You will first build a static web app that renders "Hello World." Then you will learn how to add functionality to the web app so the text that displays is based on a custom input you provide. Learn more
Step 4: Build a Multi-Microservice Web Application
Finally, you'll create a serverless web app with multiple microservices. You will host a static website, manage user authentication and build a serverless backend using AWS Amplify Console, Amazon Cognito, AWS Lambda, Amazon API Gateway, and Amazon DynamoDB. Learn more
Reference Architecture
This web reference architecture demonstrates how to use AWS Lambda in conjunction with other AWS services to build a serverless web app. This repository contains sample code for all the Lambda functions that make up the back end of the application. Learn more
Path 2: Batch Data Processing
Overview
AWS Lambda enables automated batch processing of large datasets without infrastructure management, automatically scaling resources to match your workload demands. Process files, transform data, and run parallel jobs with the flexibility of rapid scaling and the efficiency of pay-per-use pricing.
Step 1: Use an Amazon S3 trigger to create thumbnail images
Start off by creating a Lambda function and configuring a trigger for Amazon S3. For each image file uploaded to an S3 bucket, Amazon S3 invokes the function, which reads the image object from the source S3 bucket and creates a thumbnail image to save in a target S3 bucket. Learn more
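A minimal sketch of that handler follows, assuming the Pillow library is packaged with the function (for example as a Lambda layer) and that a separate target bucket named "<source-bucket>-resized" already exists; both are assumptions for illustration.

```python
# Minimal sketch: resize each uploaded image into a thumbnail and store it
# in a target bucket. Assumes Pillow is available and that the target
# bucket "<source-bucket>-resized" already exists.
import io
import boto3
from PIL import Image

s3 = boto3.client("s3")
THUMBNAIL_SIZE = (128, 128)

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Read the uploaded image from the source bucket.
        obj = s3.get_object(Bucket=bucket, Key=key)
        image = Image.open(io.BytesIO(obj["Body"].read()))
        fmt = image.format or "PNG"

        # Create the thumbnail and write it to the target bucket.
        image.thumbnail(THUMBNAIL_SIZE)
        buffer = io.BytesIO()
        image.save(buffer, format=fmt)
        buffer.seek(0)
        s3.put_object(Bucket=f"{bucket}-resized", Key=key, Body=buffer)
```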
Step 2: Large-Scale Parallel Data Processing
Next, learn how to orchestrate large-scale parallel workloads that convert .mp4 and .mov files from Amazon S3 into multiple GIF animations for timeline scrubbing. With the distributed map feature of AWS Step Functions, jobs scale up quickly, invoking thousands of parallel Lambda functions to complete work faster. Learn more
Step 3: Serverless Image Processing Hands-on Workshop
Next, you will learn how to build an image processing workflow that runs in response to an image uploaded to Amazon S3, using AWS Step Functions, a simple, powerful, fully managed service, together with AWS Lambda, Amazon DynamoDB, and Amazon Simple Notification Service (SNS). Learn more
Step 4: Building Scalable Data Processing Applications
In this blog series, learn how to design and deploy serverless applications built around Amazon S3-to-AWS Lambda architecture patterns. The solutions presented use AWS services to create scalable serverless architectures with minimal custom code. Learn more
Step 5: Pay as You Go Machine Learning Inference with AWS Lambda
Learn how to deploy machine learning models for real-time inference using AWS Lambda functions, which can mount an Amazon Elastic File System (Amazon EFS) file system. With this, you can create a Lambda function that loads its Python packages and model from EFS and performs a prediction based on a test event. Learn more
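The following sketch shows the shape of such a function, assuming the EFS access point is mounted at /mnt/ml and holds a scikit-learn model saved with joblib, and that the joblib package is bundled with the function; the paths and feature layout are illustrative assumptions.

```python
# Minimal sketch: load a model from an EFS mount once per execution
# environment and use it for predictions. The mount path, file name, and
# joblib/scikit-learn stack are assumptions for illustration.
import json
import joblib

MODEL_PATH = "/mnt/ml/model.joblib"  # assumed EFS mount path and file name

# Loading at module scope means the model is read from EFS once per
# execution environment, not on every invocation.
model = joblib.load(MODEL_PATH)

def lambda_handler(event, context):
    features = event["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
    prediction = model.predict(features).tolist()
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```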
Reference Architecture
This Real-time File Processing reference architecture is a general-purpose, event-driven, parallel data processing architecture that uses AWS Lambda. This architecture is ideal for workloads that need more than one data derivative of an object. Learn more
Path 3: Real-Time Data Processing
Overview
Real-time streaming data empowers you to gain instant analytical insights and deliver enhanced customer experiences. Using AWS Lambda with streaming sources such as Amazon Kinesis, Amazon Managed Streaming for Apache Kafka (MSK) and Apache Kafka, you can build responsive applications that analyze and act on continuous data streams without managing infrastructure.
Step 1: Introducing Streaming
Read this end-to-end guide to learn how to use AWS Lambda to process Apache Kafka streams. Learn more
Step 2: Use AWS Lambda with Amazon Managed Streaming for Apache Kafka (MSK)
Amazon MSK is a streaming data service that manages Apache Kafka infrastructure and operations. Follow this tutorial to learn how to create a Lambda function that consumes events using Lambda's MSK event source mapping. Learn more
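A minimal sketch of such a consumer is shown below; it assumes the Kafka messages are JSON-encoded, which is an illustrative assumption.

```python
# Minimal sketch: process records delivered by Lambda's MSK event source
# mapping. Kafka record values arrive base64-encoded; JSON payloads are an
# assumption for illustration.
import base64
import json

def lambda_handler(event, context):
    processed = 0
    # Records are grouped by topic-partition in the MSK event payload.
    for topic_partition, records in event["records"].items():
        for record in records:
            payload = base64.b64decode(record["value"]).decode("utf-8")
            message = json.loads(payload)
            print(f"{topic_partition} offset {record['offset']}: {message}")
            processed += 1
    return {"processed": processed}
```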
Step 3: Performance improvements when processing Apache Kafka with AWS Lambda
Enhance your stream-based real-time processing performance with scaling improvements for Apache Kafka processing in AWS Lambda, provisioned mode for Kafka event source mappings with AWS Lambda, and AWS Lambda's native support for Avro- and Protobuf-formatted Apache Kafka events.
Step 4: Stream Data Processing Workshop
Finally, you will build a comprehensive serverless data processing application using AWS Glue Schema Registry for managing Avro schemas for serialization and deserialization of messages and AWS Lambda for processing real-time data streams from Amazon MSK. Learn more
Path 4: Generative AI
Overview
The generative AI landscape evolves rapidly, driven by the surge in large language models (LLMs) that meet diverse needs. Organizations build distributed architectures to leverage specific LLMs based on unique requirements. AWS serverless architecture, powered by AWS Lambda, enables you to build generative AI applications that start small and scale seamlessly while handling distributed, event-driven workflows securely.
Path 4-1: Agentic AI
Overview
AI agents autonomously perform tasks, make decisions, and interact with systems on your behalf. Deploy them to AWS Lambda, orchestrate workflows with AWS Step Functions, and maintain full control over models and workflows tailored to your organization’s needs. The AWS Serverless Model Context Protocol (MCP) Server provides real-time guidance to enhance your serverless development.
Step 1: A comprehensive guide to implement agents, tools, and function calling
Watch this tutorial video to learn how to create and configure Amazon Bedrock agents with custom instructions and action groups. You'll connect these agents to AWS Lambda functions for external capabilities, master debugging techniques, and gain practical experience with agent interaction tracing.
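As a hedged sketch of what the Lambda side of an action group can look like, the handler below follows the function-details event and response shape; the function name and business logic are hypothetical, and you should verify the current schema in the Amazon Bedrock Agents documentation.

```python
# Hedged sketch: a Lambda function backing a Bedrock agent action group
# (function-details schema). Field names reflect the documented format at
# the time of writing; the function name and logic are hypothetical.
def lambda_handler(event, context):
    function_name = event.get("function")
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if function_name == "get_order_status":  # hypothetical action
        result = f"Order {params.get('order_id', 'unknown')} is out for delivery."
    else:
        result = f"No handler implemented for '{function_name}'."

    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "function": function_name,
            "functionResponse": {
                "responseBody": {"TEXT": {"body": result}}
            },
        },
    }
```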
Step 2: Deploy your first AI Agent
Create an AI Agent using Strands Agents SDK on AWS Lambda. Follow our sample implementation to build a user-aware AI Agent with integrated MCP Server capabilities.
Step 3: Build robust AI applications faster
Our two-part blog series guides you through creating reliable agents (Part 1) and implementing architectural patterns (Part 2). Explore a real-world aviation system to see how AWS Serverless orchestrates complex operations. Enhance your development with AWS MCP Servers for code assistants, Amazon Bedrock Agents, Amazon ECS, Amazon EKS, and AWS Serverless.
Step 4: Scale enterprise-ready Agentic AI applications
Master Agentic AI implementation through the “Building and Scaling Agentic AI Workflows” workshop. Create production-ready generative AI applications that connect with your existing APIs and Knowledge Bases through the “Amazon Bedrock Agents” workshop.
Path 4-2: Chatbot
Overview
Chatbots offer an immediate and impactful entry point for generative AI adoption. Whether deployed for 24/7 customer service automation, AI-powered help desk support, or sophisticated virtual assistants, AWS Lambda manages the entire chatbot workflow: pre- and post-processing of requests and responses, prompt engineering, model selection, applying guardrails, and working with knowledge bases. The serverless architecture enables you to quickly implement and scale intelligent conversational solutions, with the flexibility to switch between different large language models (LLMs) while maintaining security guardrails and full audit capabilities.
Step 1: Developing an enterprise-grade large language model (LLM) playground in under 6 weeks
Learn from this blog how Thomson Reuters built an enterprise-grade LLM playground with AWS Lambda managing its entire conversational AI workflow, from processing user inputs to generating responses with large language models (LLMs), and how you can quickly implement intelligent solutions for customer service, help desk support, or virtual assistants using serverless architecture.
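To make the workflow concrete, here is a minimal sketch of a chatbot handler that forwards a user message to a model on Amazon Bedrock through the boto3 Converse API; the model ID is an illustrative assumption, and pre/post-processing, guardrails, and knowledge base lookups are left as comments.

```python
# Minimal sketch: a chatbot-style Lambda handler that sends a user message
# to a model on Amazon Bedrock via the Converse API. The model ID below is
# an assumption; substitute any model enabled in your account.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed model ID

def lambda_handler(event, context):
    user_message = json.loads(event.get("body") or "{}").get("message", "Hello")

    # Pre-processing, guardrails, and knowledge base retrieval would go here.
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": user_message}]}],
    )
    reply = response["output"]["message"]["content"][0]["text"]

    # Post-processing and audit logging would go here.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"reply": reply}),
    }
```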
Step 2: Building RAG use cases with GenAI Chatbot on AWS
Start with the simple, no-code wizard to build AI applications for conversational search, AI-generated chatbots, text generation, and text summarization, from this solution library, all without requiring deep AI expertise. Reference this application example to deploy a multi-model chatbot that leverages various large language model (LLM) providers.
Step 3: Building a Serverless document chat with AWS Lambda and Amazon Bedrock
Learn from this technical blog how AWS Lambda's event-driven serverless compute model perfectly complements semantic search for large language models (LLMs) by efficiently handling document-to-vector embedding during initial ingestion, while also powering serverless LangChain operations for natural language processing of PDF documents with vector search capabilities.
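A hedged sketch of the ingestion step follows: generating a vector embedding for a chunk of document text with an Amazon Titan embedding model on Bedrock. The model ID and response field reflect the Titan Text Embeddings V2 API at the time of writing; verify them against the current documentation before relying on them.

```python
# Hedged sketch: embed document chunks during ingestion. The model ID and
# "embedding" response field are assumptions based on Titan Text Embeddings
# V2; in the full pattern the vectors are written to a vector store.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
EMBED_MODEL_ID = "amazon.titan-embed-text-v2:0"  # assumed model ID

def embed_chunk(text: str) -> list:
    response = bedrock.invoke_model(
        modelId=EMBED_MODEL_ID,
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def lambda_handler(event, context):
    # Chunks would normally come from a PDF split during ingestion.
    chunks = event.get("chunks", ["example document text"])
    vectors = [embed_chunk(c) for c in chunks]
    return {"count": len(vectors), "dimensions": len(vectors[0])}
```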
Step 4: Building secure and scalable real-time APIs for generative AI
Let this hands-on workshop guide you through building a secure, scalable API for generative AI application on AWS, progressing from basic experimentation to a production-ready solution that leverages serverless architecture while maintaining compliance with security and privacy standards.
Path 4-3: Intelligent Document Processing (IDP)
Overview
Intelligent Document Processing (IDP) addresses critical enterprise pain points by automating the extraction, analysis, and validation of information from various document types. AWS serverless services enable you to build and manage complete IDP pipelines through event-driven workflows and asynchronous processing. AWS Lambda handles document processing at scale with native batch capabilities, while Amazon EventBridge and AWS Step Functions orchestrate complex workflows and integrate with large language models (LLMs) and existing business systems. This serverless approach delivers scalable, high-throughput IDP solutions that maintain accuracy and compliance while processing diverse document formats efficiently.
Step 1: Create a regulatory document authoring solution at a 40–45% reduction in authoring time
Step 2: Guidance for intelligent document processing (IDP) on AWS
Follow this guidance on best practices and automated ML resource deployment for rapid proof-of-concept development, and explore an end-to-end architecture that combines AWS Lambda, AWS Step Functions, Amazon Textract, AI/ML enrichment, and human review capabilities for comprehensive document processing automation.
Step 3: Enhancing AWS intelligent document processing with generative AI
Read this technical blog on how AWS serverless services (AWS Lambda, AWS Step Functions, and Amazon EventBridge), integrated with foundation models, can help rapidly transform traditional manual, error-prone document processing into an automated, accurate, and scalable workflow that extracts, normalizes, and summarizes data from any document type.
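As a minimal sketch of the extraction stage in such a pipeline, the handler below pulls raw text from a document in S3 with Amazon Textract; the S3-trigger event shape is an assumption, and downstream normalization or LLM summarization steps are omitted.

```python
# Minimal sketch: extract text lines from an uploaded document with Amazon
# Textract. Assumes an S3 event trigger; later pipeline stages would
# normalize and summarize the extracted text.
import boto3

textract = boto3.client("textract")

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]  # assumed S3 event trigger
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    result = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    lines = [b["Text"] for b in result["Blocks"] if b["BlockType"] == "LINE"]
    return {"bucket": bucket, "key": key, "lines": lines}
```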
Step 4: Document extraction and summarization
Use this multi-level workshop to build an automated solution that transforms manual document processing into an efficient workflow by extracting key data and generating summaries from PDF documents, enabling organizations across industries to quickly access and analyze critical business information. You will discover how to build automated, scalable document processing workflows using AWS AI services that extract valuable information from various document formats, replacing time-consuming manual processes and handling complex content across industries, such as insurance claims, mortgages, healthcare claims, and legal contracts.
Path 4-4: Content Generation
Overview
Content generation represents one of the highest-demand generative AI applications across industries, from marketing and media to software development and technical documentation. AWS serverless services, AWS Lambda and AWS Step Functions, enable you to build sophisticated prompt chaining workflows that incorporate human-in-the-loop validation and refinement at key stages. This serverless approach allows for automated content generation while maintaining human oversight, ensuring quality and brand alignment through a series of coordinated prompts and review steps.
Step 1: Scale content creation and enhance hotel descriptions in under 10 seconds
Learn from this blog how TUI Group maintains and monitors content quality at scale, leveraging AWS Step Functions to orchestrate a sophisticated content generation workflow by coordinating AWS Lambda functions to process batch requests across multiple large language models (LLMs), validate SEO scores through third-party APIs, and automatically store high-quality content (above 80% score threshold) in Amazon DynamoDB for team review through a front-end UI.
Step 2: Prompt chaining with human in the loop
This application example demonstrates how prompt chaining decomposes a single bulky, inefficient prompt into smaller prompts that use purpose-built models, orchestrated with AWS Lambda and AWS Step Functions. The example also shows how to include a human feedback loop when you need to improve the safety and accuracy of the application.
Step 3: Building Generative AI prompt chaining workflows with human in the loop
Learn from this blog post how AWS Step Functions orchestrates a sophisticated prompt chaining workflow that breaks down complex large language model (LLM) tasks into manageable sub-tasks, incorporating AWS Lambda functions for processing, human-in-the-loop reviews through task tokens, and event-driven architecture using Amazon EventBridge for extensible system integration.
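A minimal sketch of the callback side of that pattern appears below: a Lambda function that resumes a paused execution once a reviewer decides. It assumes the state machine used .waitForTaskToken and delivered the token to the reviewer; the event fields shown are illustrative.

```python
# Minimal sketch: complete a human-in-the-loop review step by returning a
# Step Functions task token. Assumes the paused state passed its token to
# the reviewer (e.g. via email or a review UI); event fields are illustrative.
import json
import boto3

sfn = boto3.client("stepfunctions")

def lambda_handler(event, context):
    task_token = event["taskToken"]          # supplied by the paused state
    approved = event.get("approved", False)  # reviewer's decision

    if approved:
        sfn.send_task_success(
            taskToken=task_token,
            output=json.dumps({"status": "approved"}),
        )
    else:
        sfn.send_task_failure(
            taskToken=task_token,
            error="ContentRejected",
            cause="Reviewer rejected the generated content.",
        )
    return {"acknowledged": True}
```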
Step 4: Amazon Bedrock serverless prompt chaining
Learn how this application example demonstrates building complex generative AI applications using prompt chaining techniques through either AWS Step Functions (for orchestrating AWS Lambda functions and 220+ AWS services) or Amazon Bedrock Flows (purpose-built for Bedrock-specific AI workflows), both offering serverless scalability with features like parallel processing, conditional logic, and human input handling.
Path 5: No use case in mind? Start with AWS Lambda 101
Overview
New to AWS Lambda? Follow the steps in this path to build your first functional Lambda function with an event trigger.
Step 1: Log into Your AWS Account
First, log into the AWS Management Console and set up your root account. With the AWS Free Tier, you get 1 million free requests per month.
Step 2: Your First Lambda Function
Next, you will be ready to create and deploy a simple serverless Hello World function using the Lambda console, and review your output metrics. Learn more
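If you want to see what that first function looks like, here is a minimal Python handler in the spirit of the console's Hello World example; the optional name field in the test event is an illustrative assumption.

```python
# Minimal sketch: a "Hello World" handler for the Python Lambda runtime.
import json

def lambda_handler(event, context):
    name = event.get("name", "World")  # optional field in the test event
    return {
        "statusCode": 200,
        "body": json.dumps(f"Hello, {name}!"),
    }
```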
Step 3: Set Up Triggers for Lambda
Finally, set up an event trigger for Amazon S3 that will invoke your Lambda function when an event occurs. Learn more