AWS Events Content
Experience our event content at your fingertips. Explore, view, and download presentation decks from your favorite sessions and discover what’s new. Learn from AWS experts, customers, and partners to continue your educational journey in the cloud.
Amazon Bedrock Data Automation is a new gen AI–powered capability that transforms unstructured content from documents, images, video, and audio into structured data at scale. It serves use cases ranging from insurance claims processing to media asset management, advertising, and compliance. Amazon Bedrock Data Automation enables simple customization of outputs to generate specific insights in formats compatible with existing systems. Developers can configure Amazon Bedrock Data Automation via the Amazon Bedrock console using sample data and then integrate its unified multimodal inference API into their applications for high-accuracy, consistent processing. Join this chalk talk to learn how Amazon Bedrock Data Automation can help you transform your unstructured content.
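To make the workflow concrete, here is a minimal sketch of invoking the unified inference API with boto3. The operation and parameter shapes reflect our reading of the asynchronous API and may differ by SDK version; the project ARN and S3 locations are placeholders.

```python
import boto3

# Hypothetical project ARN and buckets, for illustration only.
PROJECT_ARN = "arn:aws:bedrock:us-east-1:111122223333:data-automation-project/my-project"

bda = boto3.client("bedrock-data-automation-runtime", region_name="us-east-1")

# Kick off asynchronous processing of an unstructured document in S3.
# Parameter names here are assumptions based on the async API at launch.
response = bda.invoke_data_automation_async(
    inputConfiguration={"s3Uri": "s3://my-input-bucket/claims/claim-0001.pdf"},
    outputConfiguration={"s3Uri": "s3://my-output-bucket/results/"},
    dataAutomationConfiguration={"dataAutomationArn": PROJECT_ARN},
)

# Poll until the structured output lands in the output bucket.
status = bda.get_data_automation_status(invocationArn=response["invocationArn"])
print(status["status"])
```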
Foundation models continue to grow in size with billions or even trillions of parameters, and they often won’t fit into a single accelerator device such as a GPU. Amazon SageMaker distributed training capabilities help you apply advanced parallelization techniques, communication optimizations, and efficient checkpointing strategies to distribute your training workload across hundreds or thousands of GPUs, reducing model training time and cost by up to 20%. Join this session for a deep dive into the infrastructure used to run distributed training at scale. Learn how to integrate Amazon SageMaker training capabilities to reduce the total cost of foundation model development.
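As a starting point, here is a minimal sketch of launching a distributed training job with the SageMaker Python SDK; the role ARN, instance settings, framework versions, and S3 paths are placeholders.

```python
from sagemaker.pytorch import PyTorch

# A minimal sketch: the role ARN, versions, and S3 paths are placeholders.
estimator = PyTorch(
    entry_point="train.py",              # your training script
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=16,                   # scale out across many GPU nodes
    instance_type="ml.p4d.24xlarge",     # 8 GPUs per instance
    framework_version="2.2",
    py_version="py310",
    distribution={"torch_distributed": {"enabled": True}},  # launch via torchrun
)
estimator.fit({"train": "s3://my-bucket/pretraining-data/"})
```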
Join this chalk talk to explore how best to optimize Amazon Nova models for application-specific use cases. Learn about the models' fine-tuning and distillation capabilities and how to connect them to agent frameworks. Also explore how to use their function calling capabilities. Finally, discover how to use Retrieval Augmented Generation (RAG) systems to leverage the models' understanding capabilities.
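For orientation, here is a minimal sketch of calling a Nova model through the Bedrock Converse API with boto3; the model ID shown is assumed to be the Nova Lite cross-region inference profile.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Invoke a Nova model through the Converse API; the model ID is an assumption.
response = client.converse(
    modelId="us.amazon.nova-lite-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize our Q3 support tickets."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```

The same converse call also accepts a toolConfig parameter, which is how function calling is wired in.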
Amazon S3 Metadata is a new feature that accelerates data discovery in Amazon S3 with near real-time object metadata that is stored in fully managed tables optimized for analytic workloads. As objects are added and removed, Amazon S3 Metadata automatically refreshes the metadata to reflect the latest changes. With Amazon S3 Metadata, you can easily generate, store, and query the metadata for your Amazon S3 objects, helping you quickly prepare data for business analytics, real-time inference applications, and more. Join this chalk talk to learn how to get started with Amazon S3 Metadata.
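As an illustration, here is a hedged sketch of querying a metadata table with Athena; the catalog, database, table, and column names are placeholders based on our understanding of the journal table schema.

```python
import boto3

athena = boto3.client("athena")

# Query the managed metadata table for recently created objects.
# Catalog/database/table names below are placeholders.
query = """
    SELECT key, size, last_modified_date
    FROM "s3tablescatalog"."aws_s3_metadata"."my_bucket_metadata"
    WHERE record_type = 'CREATE'
    ORDER BY last_modified_date DESC
    LIMIT 100
"""
resp = athena.start_query_execution(
    QueryString=query,
    WorkGroup="primary",
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(resp["QueryExecutionId"])
```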
In this chalk talk, learn how the new structured data retrieval capability in Amazon Bedrock Knowledge Bases is empowering organizations to unlock the value of their structured data. The fully managed solution, with a natural language to SQL (NL2SQL) module, removes the complexity of building that translation layer yourself, enabling developers to send natural language queries about their data and receive SQL queries, result sets, or narrative responses, all through a simple API call. Discover how your organization can harness the power of structured data to build the next generation of intelligent applications.
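Here is a minimal sketch of that API call using the retrieve_and_generate operation in boto3; the knowledge base ID and model ARN are placeholders, and a knowledge base connected to a structured data store may require additional configuration.

```python
import boto3

runtime = boto3.client("bedrock-agent-runtime")

# Ask a natural language question against a knowledge base.
# Knowledge base ID and model ARN are placeholders.
response = runtime.retrieve_and_generate(
    input={"text": "What were our top five products by revenue last quarter?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB123EXAMPLE",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)
print(response["output"]["text"])
```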
In this chalk talk, learn about the AWS approach to responsible AI (RAI) and forward-looking responsible scaling policies. Hear how AWS defines RAI requirements from first principles (that is, our policies) in collaboration with experts. Learn how AWS instills RAI in our systems from the start, and learn about the AWS approach to the next evolution of policies as the models scale.
Managing and querying data at scale requires building and maintaining external systems to support transactions, table maintenance, and performance optimization, which can be costly and complex. Amazon S3 Tables is purpose-built to store tabular data in Apache Iceberg tables. With Amazon S3 Tables, you can create, store, and manage tables in just a few steps, and query your data with existing AWS Analytics and third-party tools. In this chalk talk, learn how to get started and benefit from automated performance optimizations, simplifying data management as you grow.
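To show those few steps in practice, here is a hedged sketch using the s3tables client in boto3; the bucket, namespace, and table names are placeholders, and exact parameter shapes may vary by SDK version.

```python
import boto3

s3tables = boto3.client("s3tables")

# Create a table bucket, a namespace, and an Apache Iceberg table.
# All names are placeholders.
bucket = s3tables.create_table_bucket(name="analytics-table-bucket")

s3tables.create_namespace(
    tableBucketARN=bucket["arn"],
    namespace=["sales"],
)
s3tables.create_table(
    tableBucketARN=bucket["arn"],
    namespace="sales",
    name="daily_orders",
    format="ICEBERG",
)
```

From there, the table is queryable with Athena or other engines that speak Iceberg.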
In this session, explore Amazon Bedrock’s new built-in RAG evaluation and LLM-as-a-judge model evaluation capabilities, designed to help you improve and productize your Amazon Bedrock Knowledge Bases and foundation models, including custom models and imported models. Discover how to create and manage evaluation jobs and analyze performance with critical quality metrics such as correctness, completeness, and responsible AI metrics such as harmfulness, answer refusal, and stereotyping. This session provides an overview of how to use these features built directly in Amazon Bedrock and how to get started immediately.
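For a sense of the API surface, here is a hedged sketch of starting an automated evaluation job with boto3. The evaluationConfig shape and metric names are simplified assumptions; the role ARN, model ID, and S3 paths are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock")

# Start an automated evaluation job; config shape is a simplified assumption.
job = bedrock.create_evaluation_job(
    jobName="kb-rag-eval-001",
    roleArn="arn:aws:iam::111122223333:role/BedrockEvalRole",
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "QuestionAndAnswer",
                    "dataset": {
                        "name": "my-eval-set",
                        "datasetLocation": {"s3Uri": "s3://my-bucket/eval/prompts.jsonl"},
                    },
                    "metricNames": [
                        "Builtin.Correctness",
                        "Builtin.Completeness",
                        "Builtin.Harmfulness",
                    ],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [
            {"bedrockModel": {"modelIdentifier": "anthropic.claude-3-5-sonnet-20240620-v1:0"}}
        ]
    },
    outputDataConfig={"s3Uri": "s3://my-bucket/eval/results/"},
)
print(job["jobArn"])
```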
Worried about using the right data for analysis? With the new OpenLineage-compatible data lineage feature in Amazon DataZone, you can now trace the origin, transformations, and usage of data in one easy view. Automate lineage capture from AWS Glue, Amazon Redshift, and more to gain deep insights into your data’s journey. Join this session to explore how this powerful feature helps data teams confidently understand and use data to drive business value.
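As one example of the OpenLineage compatibility, here is a hedged sketch of posting a lineage run event to Amazon DataZone; the domain ID and event payload are simplified placeholders.

```python
import json
import boto3

datazone = boto3.client("datazone")

# A simplified OpenLineage-style run event; all identifiers are placeholders.
event = {
    "eventType": "COMPLETE",
    "eventTime": "2024-12-01T00:00:00Z",
    "run": {"runId": "01939f3e-aaaa-bbbb-cccc-111122223333"},
    "job": {"namespace": "my-pipeline", "name": "orders_daily_load"},
    "inputs": [{"namespace": "redshift://my-cluster", "name": "raw.orders"}],
    "outputs": [{"namespace": "redshift://my-cluster", "name": "analytics.orders_daily"}],
    "producer": "https://example.com/my-etl",
    "schemaURL": "https://openlineage.io/spec/1-0-5/OpenLineage.json",
}
datazone.post_lineage_event(
    domainIdentifier="dzd_abcdef123456",
    event=json.dumps(event),
)
```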
Amazon Keyspaces (for Apache Cassandra) is a globally scalable, serverless, fully managed database service with up to 99.999% availability. In this session, learn how GE Vernova, a leader in electrifying and decarbonizing the world, uses Amazon Keyspaces to store and query a massive 600 TB of industrial time-series data for its Asset Performance Management (APM) software. Dive deep into GE Vernova's migration of that data from ScyllaDB to Amazon Keyspaces, and explore the benefits it observed, including improved availability and scalability. Gain insights into the AWS services it used and the challenges it overcame during this large-scale migration.
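For context, here is a minimal sketch of connecting to Amazon Keyspaces with the open-source Cassandra driver and SigV4 authentication, following the commonly documented pattern; the keyspace, table, and query are placeholders.

```python
import boto3
from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED
from cassandra.cluster import Cluster
from cassandra_sigv4.auth import SigV4AuthProvider

# Keyspaces requires TLS; the Starfield root cert is downloaded separately.
ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations("sf-class2-root.crt")
ssl_context.verify_mode = CERT_REQUIRED

# Authenticate with SigV4 instead of username/password.
auth_provider = SigV4AuthProvider(boto3.Session(region_name="us-east-1"))
cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],
    ssl_context=ssl_context,
    auth_provider=auth_provider,
    port=9142,
)
session = cluster.connect()

# Hypothetical time-series keyspace and table, for illustration.
rows = session.execute(
    "SELECT asset_id, ts, reading FROM apm.sensor_readings WHERE asset_id = %s LIMIT 10",
    ("turbine-001",),
)
for row in rows:
    print(row.asset_id, row.ts, row.reading)
```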
Join this chalk talk to uncover how Amazon Bedrock IDE within Amazon SageMaker Unified Studio empowers users to rapidly build, customize, and verify generative AI applications through governed collaboration. This streamlined experience facilitates generative AI application development with advanced capabilities like agent and flow creation, Retrieval Augmented Generation (RAG), prompt engineering, and guardrails. It provides rapid iteration and optimization for tailored applications. Discover how Amazon Bedrock IDE democratizes generative AI development, fosters cross-functional collaboration, and unlocks its full potential.
As applications built with large language models (LLMs) increasingly include features such as Retrieval Augmented Generation (RAG), agents, and guardrails, a responsible evaluation process is necessary to measure performance and mitigate risks. This session covers best practices for responsible evaluation. Learn about open access libraries and AWS services that can be used in the evaluation process, and dive deep on the key steps of designing an evaluation plan, including defining a use case, assessing potential risks, choosing metrics and release criteria, designing an evaluation dataset, and interpreting results for actionable risk mitigation.
Amazon SageMaker’s inference optimization toolkit helps reduce how long it takes to optimize foundation models and achieve the best price performance for your use case. You can choose from a menu of optimization techniques, apply them to your models, validate performance improvements, and deploy the models in a few clicks. Employing techniques like speculative decoding, quantization, and compilation, SageMaker inference delivers up to two times higher throughput while reducing costs by up to 50% for models like Llama 3, Mistral, and Mixtral. It also significantly reduces engineering costs by eliminating the need for developers to spend resources on research, experimentation, and pre-deployment benchmarking.
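The flow looks roughly like the following sketch with the SageMaker Python SDK's ModelBuilder; the model ID, instance type, and configuration keys are assumptions that may differ across SDK versions, and the output path is a placeholder.

```python
from sagemaker.serve.builder.model_builder import ModelBuilder

# Build from a JumpStart model ID; the ID and settings below are assumptions.
model_builder = ModelBuilder(model="meta-textgeneration-llama-3-8b")

# Apply an optimization technique (here, AWQ quantization) and validate.
optimized_model = model_builder.optimize(
    instance_type="ml.g5.12xlarge",
    accept_eula=True,
    quantization_config={"OverrideEnvironment": {"OPTION_QUANTIZE": "awq"}},
    output_path="s3://my-bucket/optimized-llama3/",
)
predictor = optimized_model.deploy()
```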
Globally, more than half of organizations have increased their investment in generative AI programs over the past year, but the rise of this transformational technology presents a massive data challenge. While the advent of the modern data stack over the past decade unlocked structured data for advanced analytics, until now, there hasn’t been an equivalent set of tooling for the more than 80% of enterprise data that is unstructured. Join this lightning talk to discover how Unstructured is addressing this critical gap and how it can ingest and pre-process all unstructured data into formats ready for use with foundation models. This presentation is brought to you by Unstructured, an AWS Partner.
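As a taste of that ingestion flow, here is a minimal sketch using the open-source unstructured library; the file path is a placeholder.

```python
from unstructured.partition.auto import partition
from unstructured.staging.base import elements_to_json

# Partition a mixed-format document into typed elements (titles, narrative
# text, tables), then serialize to JSON for downstream RAG ingestion.
elements = partition(filename="reports/annual-report.pdf")
print(elements_to_json(elements))
```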
AWS Step Functions enables builders to orchestrate multiple AWS services to model complex workflows and state machines. Join this lightning talk for a demonstration of Workflow Studio, a low-code, visual editor for Step Functions that allows you to prototype and build workflows faster. Learn about the rich palette of integrations available in Workflow Studio and how to utilize the tool to build your own workflows.
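Under the hood, Workflow Studio emits Amazon States Language (ASL), which you can also deploy programmatically. Here is a minimal sketch with boto3; the Lambda, SNS, and role ARNs are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# A small two-state workflow: run a Lambda function, then publish to SNS.
# All ARNs are placeholders.
definition = {
    "Comment": "Process an order, then notify",
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ProcessOrder",
            "Next": "Notify",
        },
        "Notify": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                "TopicArn": "arn:aws:sns:us-east-1:111122223333:order-events",
                "Message.$": "$.orderId",
            },
            "End": True,
        },
    },
}
sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsExecutionRole",
)
```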