AWS Lambda Getting Started

Choose your own path

AWS Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you, making it easier to build applications that respond quickly to new information. Whether you are new to AWS Lambda or already have a use case in mind, choose your own path and follow the curated learning steps to get started with AWS Lambda.

Path 1: Interactive Web and API-based Microservices

Use AWS Lambda on its own or combined with other AWS services to build powerful web applications, microservices, and APIs that help you gain agility, reduce operational complexity and cost, and scale automatically.

Learn how to build a dynamic web page from a single Lambda function. You will start by assigning an HTTPS endpoint to your Lambda function with a Lambda function URL, which lets you call the function directly without having to learn, configure, and operate additional services. This is ideal for single-function microservices. Learn more
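To make this concrete, here is a minimal Python sketch of the kind of handler a function URL could serve; the HTML content and the optional name query parameter are illustrative, not part of the tutorial.

```python
# Minimal sketch of a handler exposed through a Lambda function URL.
# Function URLs deliver an API Gateway v2.0-style event to the handler.
def lambda_handler(event, context):
    # Read an optional "name" query parameter; fall back to a default.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/html"},
        "body": f"<html><body><h1>Hello, {name}!</h1></body></html>",
    }
```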

Next, you will use Amazon API Gateway to create a REST API and a resource (Amazon DynamoDB). When you call the API through an HTTPS endpoint, API Gateway invokes the Lambda function. This is ideal for microservices with multiple functions, leveraging Amazon API Gateway to map each function to API endpoints, methods and resources. Learn more
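As a rough illustration of that pattern, the following sketch assumes an API Gateway Lambda proxy integration and a hypothetical DynamoDB table named "items"; both names are assumptions, not from the tutorial.

```python
# Hypothetical handler behind an API Gateway REST API (Lambda proxy integration)
# that stores the JSON request body in a DynamoDB table named "items".
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("items")  # assumed table; the item must include its partition key

def lambda_handler(event, context):
    item = json.loads(event.get("body") or "{}")
    table.put_item(Item=item)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"stored": item}),
    }
```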

Now you are ready to create a simple web application using AWS Lambda, Amazon API Gateway, Amazon DynamoDB, and AWS Amplify Console. You will first build a static web app that renders "Hello World." Then you will learn how to add functionality to the web app so the text that displays is based on a custom input you provide. Learn more

Finally, you'll create a serverless web app with multiple microservices. You will host a static website, manage user authentication and build a serverless backend using AWS Amplify Console, Amazon Cognito, AWS Lambda, Amazon API Gateway, and Amazon DynamoDB. Learn more

This web reference architecture demonstrates how to use AWS Lambda in conjunction with other AWS services to build a serverless web app. This repository contains sample code for all the Lambda functions that make up the back end of the application. Learn more

Path 2: Batch Data Processing

Serverless allows you to ingest, process, and analyze high volumes of data quickly and efficiently. Learn how to build a scalable serverless data processing solution. Use Amazon Simple Storage Service (Amazon S3) to trigger data processing, or load machine learning (ML) models from Amazon Elastic File System (Amazon EFS) into AWS Lambda to perform ML inference in real time.

Start by creating a Lambda function and configuring an Amazon S3 trigger. For each image file uploaded to an S3 bucket, Amazon S3 invokes the function, which reads the image object from the source S3 bucket and creates a thumbnail image to save in a target S3 bucket. Learn more
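A minimal sketch of that resize flow is shown below; the target bucket name and thumbnail size are assumptions, and the Pillow library is assumed to be packaged with the function (for example, as a Lambda layer).

```python
# Sketch of an S3-triggered thumbnail function. Bucket names and sizes are illustrative.
import io
import boto3
from PIL import Image  # Pillow must be bundled with the function or a layer

s3 = boto3.client("s3")
TARGET_BUCKET = "my-thumbnail-bucket"  # hypothetical target bucket

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Read the source image, shrink it in place, and write the result.
        obj = s3.get_object(Bucket=bucket, Key=key)
        image = Image.open(io.BytesIO(obj["Body"].read()))
        image.thumbnail((128, 128))
        buffer = io.BytesIO()
        image.save(buffer, format=image.format or "PNG")
        buffer.seek(0)
        s3.put_object(Bucket=TARGET_BUCKET, Key=f"thumb-{key}", Body=buffer)
```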

Also, learn how to orchestrate large-scale parallel workloads that convert .mp4 and .mov files from Amazon S3 into multiple GIF animations for timeline scrubbing. With the AWS Step Functions distributed map, jobs scale up quickly, invoking thousands of parallel Lambda functions to complete faster. Learn more

Next, you will learn how to build an image processing workflow that runs in response to an image uploaded to Amazon S3, using AWS Step Functions, a simple, powerful, fully managed service, together with AWS Lambda, Amazon DynamoDB, and Amazon Simple Notification Service (Amazon SNS). Learn more

In this blog series, learn how to design and deploy serverless applications built around Amazon S3-to-AWS Lambda architecture patterns. The solutions presented use AWS services to create scalable serverless architectures with minimal custom code. Learn more

Learn how to deploy machine learning models for real-time inference using AWS Lambda functions, which can now mount an Amazon Elastic File System (Amazon EFS). With this, you can create a Lambda function that loads the Python packages and model from EFS and performs a prediction based on a test event. Learn more
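The sketch below illustrates that pattern under stated assumptions: the function mounts an EFS access point at /mnt/ml, and the Python packages and a serialized scikit-learn model were staged there beforehand; all paths and file names are illustrative.

```python
# Hedged sketch of EFS-backed inference in a Lambda function.
import sys

# Make packages installed on the file system importable before importing them.
sys.path.append("/mnt/ml/lib")  # assumed EFS mount path

import joblib  # loaded from the EFS path appended above

# Loading outside the handler caches the model across warm invocations.
model = joblib.load("/mnt/ml/models/classifier.joblib")  # assumed model file

def lambda_handler(event, context):
    features = event["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]] from a test event
    prediction = model.predict(features)
    return {"prediction": prediction.tolist()}
```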

This Real-time File Processing reference architecture is a general-purpose, event-driven, parallel data processing architecture that uses AWS Lambda. This architecture is ideal for workloads that need more than one data derivative of an object. Learn more

Path 3: Real-Time Data Processing

Streaming data allows you to gather analytical insights and act upon them, but it also presents a unique set of design and architectural challenges. Learn how to achieve several common goals of streaming data workloads by using AWS Lambda and Amazon Kinesis to capture messages, process and aggregate the records, and finally load the results into downstream systems for analysis or further processing.

Amazon Kinesis is a service that makes it easy to collect, process, and analyze video and data streams in real time. You will start by creating a Lambda function to consume events from a Kinesis stream. Learn more
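The following is a minimal sketch of such a consumer, assuming producers send JSON messages; Lambda delivers Kinesis records in batches with base64-encoded payloads.

```python
# Sketch of a Lambda consumer for a Kinesis stream; the processing step is a placeholder.
import base64
import json

def lambda_handler(event, context):
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        message = json.loads(payload)  # assumes producers send JSON
        print(f"Sequence {record['kinesis']['sequenceNumber']}: {message}")
    # Only meaningful if the event source mapping enables ReportBatchItemFailures.
    return {"batchItemFailures": []}
```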

Next, you will build a comprehensive serverless application that processes real-time data streams, using Amazon Kinesis to create the streams and AWS Lambda to process them in real time. Learn more

Finally, read this blog series to learn how to build a streaming data backend for a home fitness system by using a serverless approach. You will learn key streaming concepts and how to handle these in a serverless workload. Learn more

This reference architecture will use AWS Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, click stream analysis, data cleansing, metrics generation, log filtering, indexing, social media analysis, and IoT device data telemetry and metering. Learn more

Path 4: Generative AI

The generative AI landscape is evolving rapidly, and organizations need to innovate and adapt quickly to maintain competitive advantage. This evolution is catalyzed by a significant surge in large language models (LLMs) that meet diverse needs. Organizations are building distributed architectures that leverage specific LLMs based on unique requirements. AWS serverless architecture, powered by AWS Lambda, is ideal for generative AI applications, enabling you to start small and scale seamlessly while handling distributed, event-driven workflows securely at scale.

Path 4-1: Agentic AI

AI agents enable you to build enterprise systems that autonomously handle complex tasks and make decisions. Use Amazon Bedrock to create an agent in just a few quick steps by selecting a foundation model and providing it access to your enterprise systems and knowledge bases. Use AWS serverless services, AWS Lambda and AWS Step Functions, to securely execute your APIs and orchestrate workflows, while maintaining full control over models and workflows tailored to your organization’s specific business needs.

Learn how to create and configure Amazon Bedrock agents with custom instructions and action groups, connecting them to AWS Lambda functions for external capabilities like time-telling and arithmetic operations, while gaining practical insights into debugging and tracing agent interactions through this hands-on tutorial.
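As a rough illustration, the sketch below shows the kind of Lambda function an action group can call for time-telling and arithmetic; the tool names are invented, and the event/response shapes follow the function-details style documented for Amazon Bedrock Agents, so verify the exact contract against the current documentation.

```python
# Hedged sketch of an action group Lambda for a Bedrock agent.
from datetime import datetime, timezone

def lambda_handler(event, context):
    function_name = event.get("function", "")
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if function_name == "get_current_time":      # illustrative tool name
        result = datetime.now(timezone.utc).isoformat()
    elif function_name == "add_numbers":          # illustrative arithmetic tool
        result = str(float(params["a"]) + float(params["b"]))
    else:
        result = f"Unknown function: {function_name}"

    # Response shape expected by Bedrock Agents for function-details action groups.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "function": function_name,
            "functionResponse": {"responseBody": {"TEXT": {"body": result}}},
        },
    }
```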

Explore best practices for building generative AI applications in this two-part blog series, covering the fundamentals of creating accurate and reliable agents in Part 1 and diving into architectural considerations and development lifecycle practices in Part 2. Discover a real-world use case of an on-demand private aviation system to learn how AWS serverless services help you build dynamic, personalized experiences that efficiently orchestrate complex operations behind the scenes.

Learn how “tool use” (also referred to as function calling) allows AI models to invoke external services or execute specific code (such as APIs or AWS Lambda functions) in response to user input in the “Building a virtual assistant with tool use” workshop. Get hands-on experience working with Amazon Bedrock Agents to create Generative AI applications that connect with existing APIs and Knowledge Bases in the “Amazon Bedrock Agents” workshop.

Path 4-2: Chatbot

Chatbots offer an immediate and impactful entry point for generative AI adoption. Whether deployed for 24/7 customer service automation, AI-powered help desk support, or sophisticated virtual assistants, AWS Lambda manages the entire chatbot workflow: pre- and post-processing of requests and responses, prompt engineering, model selection, applying guardrails, and working with knowledge bases. The serverless architecture enables you to quickly implement and scale intelligent conversational solutions, with the flexibility to switch between different large language models (LLMs) while maintaining security guardrails and full audit capabilities.

Learn from this blog how Thomson Reuters builds chatbots with AWS Lambda to manage its entire conversational AI workflow—from processing user inputs to generating responses with large language models (LLMs)—and quickly implement intelligent solutions for customer service, help desk support, or virtual assistants using serverless architecture.

Start with the simple, no-code wizard from this solution library to build AI applications for conversational search, AI-generated chatbots, text generation, and text summarization, all without requiring deep AI expertise. Reference this application example to deploy a multi-model chatbot that leverages various large language model (LLM) providers.

Learn from this technical blog how AWS Lambda's event-driven serverless compute model perfectly complements semantic search for large language models (LLMs) by efficiently handling document-to-vector embedding during initial ingestion, while also powering serverless LangChain operations for natural language processing of PDF documents with vector search capabilities.
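A rough sketch of the ingestion step described there appears below: a Lambda function turns pre-split document text into vector embeddings through Amazon Bedrock. The embedding model ID, the event shape, and the downstream vector store are assumptions for illustration.

```python
# Hedged sketch of embedding generation during document ingestion.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def embed_text(text: str) -> list[float]:
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",  # assumed embedding model
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def lambda_handler(event, context):
    chunks = event["chunks"]  # assumed: text chunks extracted from an uploaded PDF
    vectors = [embed_text(chunk) for chunk in chunks]
    # In the blog's architecture the vectors would then be written to a vector
    # store (for example, an OpenSearch index) to power semantic search.
    return {"count": len(vectors)}
```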

Let this hands-on workshop guide you through building a secure, scalable API for a generative AI application on AWS, progressing from basic experimentation to a production-ready solution that leverages serverless architecture while maintaining compliance with security and privacy standards.

Path 4-3: Intelligent Document Processing (IDP)

Intelligent Document Processing (IDP) addresses critical enterprise pain points by automating the extraction, analysis, and validation of information from various document types. AWS serverless services enable you to build and manage complete IDP pipelines through event-driven workflows and asynchronous processing. AWS Lambda handles document processing at scale with native batch capabilities, while Amazon EventBridge and AWS Step Functions orchestrate complex workflows and integrate with large language models (LLMs) and existing business systems. This serverless approach delivers scalable, high-throughput IDP solutions that maintain accuracy and compliance while processing diverse document formats efficiently.

Learn from Accenture’s solution how AWS generative AI and serverless services, such as AWS Lambda, can transform the complex pharmaceutical regulatory document process by automating the creation of Common Technical Documents (CTDs), significantly reducing the 100,000 annual hours typically required while maintaining security and compliance standards.

Leverage this guidance on best practices and automated ML resource deployment for rapid proof-of-concept development, and explore this end-to-end architecture that combines AWS Lambda, AWS Step Functions, Amazon Textract, AI/ML enrichment, and human review capabilities for comprehensive document processing automation.
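To ground the extraction step, here is a hedged sketch of a Lambda function that runs Amazon Textract on a single-page document dropped into S3; the bucket and key come from the triggering event, multi-page PDFs would need Textract's asynchronous APIs instead, and the LLM enrichment and human review steps are omitted.

```python
# Minimal sketch of the text-extraction stage of an IDP pipeline.
import boto3

textract = boto3.client("textract")

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]  # S3 put event from the upload bucket
    response = textract.detect_document_text(
        Document={"S3Object": {"Bucket": record["bucket"]["name"],
                               "Name": record["object"]["key"]}}
    )
    # Keep only LINE blocks; downstream steps would normalize and summarize them.
    lines = [b["Text"] for b in response["Blocks"] if b["BlockType"] == "LINE"]
    return {"lineCount": len(lines), "preview": lines[:5]}
```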

Read this technical blog on how AWS serverless services, AWS Lambda, AWS Step Functions, and Amazon EventBridge, integrated with foundation models, can help rapidly transform traditional manual, error-prone document processing into an automated, accurate, and scalable workflow that can extract, normalize, and summarize data from any document type.

Use this workshop to build an automated solution that transforms manual document processing into an efficient workflow by extracting key data and generating summaries from PDF documents, enabling organizations across industries to quickly access and analyze critical business information. In this multi-level workshop, discover how to build automated, scalable document processing workflows using AWS AI services that extract valuable information from various document formats, replacing time-consuming manual processes and handling complex content across industries, such as insurance claims, mortgages, healthcare claims, contracts, and legal documents.

Path 4-4: Content Generation

Content generation represents one of the highest-demand generative AI applications across industries, from marketing and media to software development and technical documentation. AWS serverless services, AWS Lambda and AWS Step Functions, enable you to build sophisticated prompt chaining workflows that incorporate human-in-the-loop validation and refinement at key stages. This serverless approach allows for automated content generation while maintaining human oversight, ensuring quality and brand alignment through a series of coordinated prompts and review steps.

Learn from this blog how TUI Group maintains and monitors content quality at scale, leveraging AWS Step Functions to orchestrate a sophisticated content generation workflow by coordinating AWS Lambda functions to process batch requests across multiple large language models (LLMs), validate SEO scores through third-party APIs, and automatically store high-quality content (above 80% score threshold) in Amazon DynamoDB for team review through a front-end UI.

This application example demonstrates the use of prompt chaining to decompose a bulky, inefficient prompt into smaller prompts and to use purpose-built models with AWS Lambda and AWS Step Functions. The example also shows how to include a human feedback loop when you need to improve the safety and accuracy of the application.

Learn from this blog post how AWS Step Functions orchestrates a sophisticated prompt chaining workflow that breaks down complex large language model (LLM) tasks into manageable sub-tasks, incorporating AWS Lambda functions for processing, human-in-the-loop reviews through task tokens, and event-driven architecture using Amazon EventBridge for extensible system integration.
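As an illustration of one link in such a chain, the sketch below shows a Lambda task that takes the previous step's output from its input, calls a model through Amazon Bedrock's Converse API, and returns text for the next state; the model ID and field names are assumptions.

```python
# Hedged sketch of a single prompt-chaining step invoked as a Step Functions task.
import boto3

bedrock = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    # The state input carries a prompt template plus the previous step's output.
    prompt = event["prompt"].format(previous=event.get("previous_output", ""))
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    text = response["output"]["message"]["content"][0]["text"]
    # Step Functions passes this dict as input to the next state in the chain.
    return {"previous_output": text}
```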

Learn how this application example demonstrates building complex generative AI applications using prompt chaining techniques through either AWS Step Functions (for orchestrating AWS Lambda functions and 220+ AWS services) or Amazon Bedrock Flows (purpose-built for Bedrock-specific AI workflows), both offering serverless scalability with features like parallel processing, conditional logic, and human input handling.

Path 5: No use case in mind? Start with AWS Lambda 101

New to AWS Lambda? Follow the steps in this path and build your first working Lambda function with an event trigger.

First, log into the AWS Management Console and set up your root account. With the AWS Free Tier, you get 1 million free requests per month.

Next, you will be ready to create and deploy a simple serverless Hello World function using the Lambda console, and review your output metrics. Learn more
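For reference, the console's default Python example is essentially a handler like the sketch below; the exact template may differ by runtime version.

```python
# Roughly what the Lambda console's Python "Hello World" blueprint returns.
import json

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from Lambda!"),
    }
```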

Finally, set up an event trigger for Amazon S3 that will invoke your Lambda function when an event occurs. Learn more