AWS Database Blog

Tuesday, November 27: Amazon DynamoDB sessions and workshops at re:Invent

This blog post lists the Amazon DynamoDB sessions, workshops, and chalk talks taking place today at AWS re:Invent 2018. You can also see this list in the live session catalog. Follow @DynamoDB on Twitter for re:Invent updates and other DynamoDB news.

Tuesday, November 27

8:30 AM

DAT404-R – Advanced Design Patterns for Amazon DynamoDB – Workshop

This hands-on workshop is designed for developers, engineers, and database administrators who are involved in designing and maintaining DynamoDB applications. We begin with a walkthrough of proven NoSQL design patterns for at-scale applications. Next, we use step-by-step instructions to apply lessons learned to design DynamoDB tables and indexes that are optimized for performance and cost. Expect to leave this session with the knowledge to build and monitor DynamoDB applications that can grow to any size and scale. Before starting the workshop, you should have a basic understanding of DynamoDB. Please bring your laptop and power supply to the workshop.
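
If you want a preview of the kind of schema the workshop builds toward, here is a rough, unofficial sketch in Python (boto3) of a table with a generic composite primary key and an overloaded global secondary index. Every table, index, and attribute name below is hypothetical, not workshop material.

```python
import boto3

# Unofficial sketch: a generic single-table schema with a composite primary
# key and one overloaded global secondary index. All names are hypothetical.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="WorkshopOrders",
    AttributeDefinitions=[
        {"AttributeName": "PK", "AttributeType": "S"},
        {"AttributeName": "SK", "AttributeType": "S"},
        {"AttributeName": "GSI1PK", "AttributeType": "S"},
        {"AttributeName": "GSI1SK", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "PK", "KeyType": "HASH"},   # partition key
        {"AttributeName": "SK", "KeyType": "RANGE"},  # sort key
    ],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "GSI1",
            "KeySchema": [
                {"AttributeName": "GSI1PK", "KeyType": "HASH"},
                {"AttributeName": "GSI1SK", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```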

11:30 AM

DAT357-R – Build Internet-Scale Apps with Amazon DynamoDB

DynamoDB is a nonrelational database that delivers reliable performance at any scale. It’s a fully managed, multi-Region, multi-master database that provides consistent single-digit millisecond latency and offers built-in security, backup and restore, and in-memory caching. Come to this session to learn how to build internet-scale applications with DynamoDB.
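
If you're new to DynamoDB, here is a minimal boto3 sketch of the basic write and read path, assuming a hypothetical Users table keyed on userId already exists in the target Region:

```python
import boto3

# Minimal sketch, assuming a table named "Users" with partition key "userId"
# (both hypothetical) already exists.
table = boto3.resource("dynamodb", region_name="us-east-1").Table("Users")

# Write a single item.
table.put_item(Item={"userId": "u-123", "displayName": "Alice"})

# Read it back; ConsistentRead opts in to a strongly consistent read.
response = table.get_item(Key={"userId": "u-123"}, ConsistentRead=True)
print(response.get("Item"))
```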

1:45 PM

DAT333 – Real-World Use Cases for Amazon DynamoDB

Build (or test!) your DynamoDB chops in this chalk talk where we work together to design data models and solutions for real-world use cases using DynamoDB. Share your experiences and best practices, and ask questions of DynamoDB experts.

2:30 PM

DAT311-R – Building Serverless Applications with Amazon DynamoDB & AWS Lambda – Workshop

Join us for this first-ever advanced design and best practices workshop, which is designed to demonstrate the breadth of AWS serverless offerings and how the components work together. In this interactive workshop, we review the evolution of an e-commerce company that starts with a low-effort serverless product catalog, scales to a million daily users, and then adds analytics and near-real-time monitoring. As we progress through the workshop, we dive deeply into AWS serverless services, such as DynamoDB, AWS Lambda, and Amazon Kinesis. We also use Amazon S3, Amazon API Gateway, Amazon Cognito, and other services that enable you to optimize costs and improve performance. Basic knowledge of DynamoDB, Lambda, and Kinesis is required. Please bring your laptop and power supply to this session.
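
For a rough sense of how these pieces fit together (a sketch with assumed names, not the workshop's code), here is a minimal Lambda handler that writes an API Gateway request body into a DynamoDB product-catalog table:

```python
import json
import os

import boto3

# Hypothetical handler for a product-catalog API: an API Gateway proxy request
# body is written to a DynamoDB table whose name comes from an environment
# variable. The table and variable names are assumptions.
table = boto3.resource("dynamodb").Table(os.environ.get("CATALOG_TABLE", "ProductCatalog"))


def handler(event, context):
    product = json.loads(event["body"])  # e.g. {"productId": "p-1", "name": "Widget"}
    table.put_item(Item=product)
    return {"statusCode": 201, "body": json.dumps({"productId": product["productId"]})}
```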

DAT320 – Becoming a Nimble Giant: How Amazon DynamoDB Serves Nike at Scale

In this session, learn how Nike Digital migrated its large Cassandra and Couchbase clusters to fully managed DynamoDB. We share how Cassandra and Couchbase proved operationally challenging for our engineering teams and couldn't scale to meet high-traffic product launches. We discuss how the flexible data model of DynamoDB lets Nike focus on innovating for consumer experiences instead of managing database clusters. We also share the best practices we learned for effectively using DynamoDB Time to Live (TTL), auto scaling, on-demand backups, point-in-time recovery, and adaptive capacity in applications that require the scale, performance, and reliability to meet Nike's business requirements.
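
For reference, here is a minimal boto3 sketch of two of the features mentioned above, enabling TTL and point-in-time recovery; the table and attribute names are assumptions:

```python
import boto3

# Minimal sketch of two features mentioned above; "LaunchEvents" and
# "expiresAt" are hypothetical names.
dynamodb = boto3.client("dynamodb", region_name="us-west-2")

# Expire items automatically once the epoch timestamp in "expiresAt" passes.
dynamodb.update_time_to_live(
    TableName="LaunchEvents",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expiresAt"},
)

# Turn on point-in-time recovery (continuous backups).
dynamodb.update_continuous_backups(
    TableName="LaunchEvents",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```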

3:15 PM

DAT326 – The Amazon.com Database Journey to AWS – Top 10 Lessons Learned

In this session, we share the top 10 lessons learned from migrating the online transaction processing (OLTP) and data warehouse (DW) databases used by Amazon.com to AWS services, such as Amazon Relational Database Service (Amazon RDS), Amazon Aurora, Amazon Redshift, and Amazon DynamoDB. We discuss the challenges associated with operating and managing legacy OLTP and DW databases at Amazon.com scale and how the Amazon.com team successfully executed the database freedom program across different organizations and geographies.

DAT349 – Deep Dive on Amazon DynamoDB Global Tables

Amazon DynamoDB global tables provide you with a fully managed, multi-Region, and multi-master database. With global tables, you can replicate table data to multiple AWS Regions for higher availability and provide your applications local access to DynamoDB tables for fast read and write performance. In this chalk talk, we dive deep on keys to success when designing global tables. Learn how to manage throughput capacity for your global table correctly, and get a deep understanding of how global table replication works. We also walk through reference architectures and examples that you can take with you to help you build and optimize your own global applications.
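
If you want to experiment before the chalk talk, the sketch below shows the boto3 call that registers replicas for a global table. It assumes identically named tables with DynamoDB Streams enabled already exist in each Region, and the table name is hypothetical:

```python
import boto3

# Assumes a table named "SessionState" (hypothetical) already exists in both
# Regions with DynamoDB Streams (NEW_AND_OLD_IMAGES) enabled, as global tables
# require.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_global_table(
    GlobalTableName="SessionState",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
```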

4:00 PM

DAT352-R – Migrate Your Nonrelational Database to AWS

In this session, learn how to migrate a nonrelational database, such as Cassandra or MongoDB, to DynamoDB. We review how AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool (AWS SCT) can help you migrate quickly and securely, and we show you how DynamoDB implements the functionality of your source database.

4:45 PM

DAT341 – Migrating Financial and Accounting Systems from Oracle to Amazon DynamoDB

In this session, we discuss the lessons we learned from migrating the financial ledger and accounting system that Amazon uses from Oracle to AWS. We share the performance and cost benefits for enterprises that migrate critical systems from Oracle to AWS. We also discuss the decision frameworks we used to choose the appropriate AWS service for each application, and best practices in project management when migrating databases.

5:30 PM

DAT401 – Amazon DynamoDB Deep Dive: Advanced Design Patterns for DynamoDB

In this expert-level session, we cover patterns and data models that summarize a collection of implementations and best practices that Amazon.com uses to deliver highly scalable solutions for a wide variety of business problems. We also cover strategies for global secondary index sharding and index overloading, scalable graph processing with materialized queries, relational modeling with composite keys, and executing transactional workflows on DynamoDB.
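
To make a couple of those patterns concrete outside the session, here is a rough boto3 sketch of a composite-key query and a write-sharded global secondary index key; the table, index, and attribute names are hypothetical:

```python
import random

import boto3
from boto3.dynamodb.conditions import Key

# Illustrative sketches of two patterns named above, using hypothetical
# table, index, and attribute names.
table = boto3.resource("dynamodb", region_name="us-east-1").Table("AppData")

# Composite-key modeling: fetch one customer's items whose sort key starts
# with "ORDER#" (orders stored alongside the customer profile).
orders = table.query(
    KeyConditionExpression=Key("PK").eq("CUSTOMER#42") & Key("SK").begins_with("ORDER#")
)["Items"]

# GSI write sharding: spread a hot index partition key across N suffixes so
# index writes are distributed; readers query all shards and merge results.
SHARDS = 10
table.put_item(
    Item={
        "PK": "CUSTOMER#42",
        "SK": "ORDER#2018-11-27#0001",
        "GSI1PK": f"ORDERDATE#2018-11-27#{random.randrange(SHARDS)}",
        "GSI1SK": "ORDER#0001",
    }
)
```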