Amazon Web Services

In this tutorial, Wanjiko Kahara demonstrates how to build an end-to-end data pipeline using Amazon Bedrock Agents. The video covers four key steps: ingesting data with Amazon S3, integrating Knowledge Bases for Amazon Bedrock, setting up Agents for Amazon Bedrock, and testing and deploying the result. Kahara takes a low-code approach, making the process accessible to users of all technical backgrounds. She explains concepts such as Retrieval-Augmented Generation (RAG) and shows how Amazon Bedrock's customizable knowledge bases and agents can understand and fulfill user requests through natural conversation.

00:00 Intro
01:55 Amazon S3 Integration
02:43 Amazon Bedrock
03:25 Understanding RAG
05:15 Amazon Bedrock Knowledge Bases
08:51 Amazon Bedrock Agents
12:26 Test Agent
13:48 Closing
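
The walkthrough in the video is low-code and console-based, but the same pipeline can also be driven programmatically. Below is a minimal boto3 sketch of the three building blocks the video covers: uploading a source document to S3, querying the knowledge base with RAG, and invoking the agent. The bucket name, file names, IDs, model ARN, and prompts are hypothetical placeholders, not values from the video, and the knowledge base, agent, and agent alias are assumed to already exist.

```python
import uuid
import boto3

s3 = boto3.client("s3")
bedrock_runtime = boto3.client("bedrock-agent-runtime")

# Step 1: data ingestion - upload a source document to the S3 bucket
# that backs the knowledge base (bucket and key are placeholders).
s3.upload_file("product-faq.pdf", "my-kb-source-bucket", "docs/product-faq.pdf")

# Step 2: query the knowledge base with RAG - retrieve relevant chunks,
# then generate a grounded answer with a foundation model.
rag_response = bedrock_runtime.retrieve_and_generate(
    input={"text": "What is the product return policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(rag_response["output"]["text"])

# Step 3: invoke the Bedrock agent, which combines the knowledge base with
# its instructions to fulfill a request through natural conversation.
agent_response = bedrock_runtime.invoke_agent(
    agentId="AGENT12345",       # placeholder ID
    agentAliasId="ALIAS12345",  # placeholder ID
    sessionId=str(uuid.uuid4()),
    inputText="Summarize the return policy in two sentences.",
)

# The agent returns a streamed completion; concatenate the chunks.
answer = "".join(
    event["chunk"]["bytes"].decode("utf-8")
    for event in agent_response["completion"]
    if "chunk" in event
)
print(answer)
```

This mirrors the flow of the video: the S3 upload corresponds to the data-ingestion step, retrieve_and_generate corresponds to querying the knowledge base, and invoke_agent corresponds to the agent test at the end.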

product-information
skills-and-how-to
generative-ai
ai-ml
analytics

Up Next

30:23
T3-2 Getting Started with No-Code Machine Learning in Amazon SageMaker Canvas (Level 200)
Jun 27, 2025

31:49
T2-3 Building Generative AI Applications with AWS (Level 300)
Jun 27, 2025

26:05
T4-4: How to Prepare for AWS Certification Exams: AWS Certified Solutions Architect – Associate, Second Half
Jun 26, 2025

32:15
T3-1: Your First Container Workload - First Steps with Containers on AWS
Jun 26, 2025

29:37
BOS-09: Getting Started with Serverless - Serverless Application Development with AWS Lambda (Level 200)
Jun 26, 2025