AWS Database Blog

AWS re:Invent 2017 Roundup: All Amazon DynamoDB-Related Sessions for Your On-Demand Viewing

Though AWS re:Invent 2017 concluded two months ago, we still wanted to round up and share all the excellent Amazon DynamoDB session content from the conference. Each entry below gives the title of a DynamoDB-related session with a link to the recording, the session description, and who will get the most out of the session.

Amazon.com – Replacing 100s of Oracle DBs with Just One: Amazon DynamoDB (ARC406)
A mission-critical system used by more than 300 Amazon engineering teams, Herd executes more than four billion workflows every day. Beginning in 2013, Herd’s workflow traffic was doubling every year, and scaling the dozens of horizontally partitioned Oracle databases it ran at the time was becoming a nightmare. To support Herd’s scaling needs and to provide a better customer experience, the Herd team had to rearchitect its storage system and move its primary data storage from Oracle to Amazon DynamoDB. In this expert-level session, we discuss this migration to DynamoDB, walk through the biggest challenges we faced and how we overcame them, and share the lessons we learned along the way.
Best for: Anyone who wants to learn how Amazon.com migrated hundreds of Oracle databases to DynamoDB.

Cache Me If You Can: Minimizing Latency While Optimizing Cost Through Advanced Caching Strategies (ATC303)
From Amazon CloudFront to ElastiCache to Amazon DynamoDB Accelerator (DAX), this session is your one-stop shop for learning how to apply caching methods to your AdTech workloads. What data should you cache and why? What are common side effects and pitfalls when caching? How should you use DAX in practice? How can you ensure that data always stays current in your cache? We discuss these topics in depth during this session, and we share lessons learned from Team Internet.
Best for: Anyone who wants to learn more about how to apply caching methods to AdTech workloads.

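If you want a quick feel for DAX before watching the talk, the gist is that the DAX client mirrors the standard DynamoDB interface, so adding the cache is largely a client swap. Here is a minimal Python sketch, assuming the amazondax client library plus a hypothetical cluster endpoint and a hypothetical AdEvents table:

import boto3
from amazondax import AmazonDaxClient

# Plain DynamoDB resource: every read goes to the table.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

# DAX resource: same interface, but eventually consistent reads are served from
# the cluster's item and query caches when possible, falling back to DynamoDB on a miss.
dax = AmazonDaxClient.resource(
    endpoint_url="dax://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"  # hypothetical endpoint
)

table = dax.Table("AdEvents")  # hypothetical table name
response = table.get_item(Key={"EventId": "click-0001"})
print(response.get("Item"))
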
DynamoDB – What’s New (DAT304)
In this general session for Amazon DynamoDB, we cover newly announced features and provide an end-to-end view of recent innovations. We also share some of our customer success stories and use cases, and give live demos of global tables and on-demand backups.
Best for: Anyone who wants to learn about DynamoDB and what’s new.

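For reference, both features demoed in that session are exposed through the standard SDKs. Here is a minimal boto3 sketch, assuming a hypothetical GameScores table that already exists, empty and identically named, with streams enabled, in each listed region:

import boto3

client = boto3.client("dynamodb", region_name="us-east-1")

# Take an on-demand backup of an existing table (table and backup names are hypothetical).
client.create_backup(TableName="GameScores", BackupName="GameScores-pre-reinvent")

# Create a global table (2017.11.29 version) from identically named tables that already
# exist, are empty, and have streams (new and old images) enabled in each region.
client.create_global_table(
    GlobalTableName="GameScores",
    ReplicationGroup=[{"RegionName": "us-east-1"}, {"RegionName": "eu-west-1"}],
)
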
Moving a Galaxy into the Cloud: Best Practices from Samsung on Migrating to Amazon DynamoDB (DAT320)
In this session, we introduce you to the best practices for migrating databases, such as traditional relational database management systems or other NoSQL databases, to Amazon DynamoDB. We discuss key DynamoDB concepts, evaluation criteria, data modeling in DynamoDB, moving data into DynamoDB, and key considerations for data migration. We share a case study of Samsung Electronics, which migrated its Cassandra cluster to DynamoDB for its Samsung Cloud workload.
Best for: Anyone who wants to learn how Samsung migrated to DynamoDB.

Expedia Flies with DynamoDB: Lightning-Fast Stream Processing for Travel Analytics (DAT324)
Building rich, high-performance streaming data systems requires fast, on-demand access to reference datasets to implement complex business logic. In this session, Expedia discusses the architectural challenges the company faced, and how Amazon DynamoDB Accelerator (DAX) and Amazon DynamoDB fit into the overall architecture and met Expedia’s design requirements. You will learn about: 1) Expedia’s overall architectural patterns for streaming data, 2) how Expedia uniquely uses DynamoDB, DAX, Apache Spark, and Apache Kafka to address issues, and 3) the value that DAX provides and how it enabled Expedia to improve their performance and throughput, and reduce costs—all without having to write any new code.
Best for: Anyone who wants to hear how Expedia uses DynamoDB and DAX for fast, on-demand access to reference datasets.

Snapchat Stories on Amazon DynamoDB (DAT325)
The backend for the Snapchat Stories feature includes Snapchat’s largest storage write workload. Learn how Snapchat rebuilt this workload for Amazon DynamoDB and executed the migration. Safely moving such a critical and high-scale piece of the Stories infrastructure to a new system right before yearly peak usage led to interesting challenges. In this session, we cover data model changes that use DynamoDB’s strengths and improve both performance and cost. We also cover challenges and risks in making remote calls across cloud providers, dealing with issues of scale, forecasting capacity requirements, and mitigating the risks of taking an unproven system through the dramatic traffic spikes that occur on New Year’s Eve.
Best for: Anyone who wants to learn how Snapchat uses DynamoDB for Snapchat Stories.

How DynamoDB Powered Amazon Prime Day 2017 (DAT326)
Sales on Prime Day 2017 surpassed Black Friday and Cyber Monday, making it the biggest sales day ever in Amazon history. An event of this scale requires infrastructure that can easily scale to match the surge in traffic. In this session, learn how AWS and Amazon DynamoDB powered Prime Day 2017. DynamoDB requests from Amazon Alexa, the Amazon.com sites, and the Amazon fulfillment centers peaked at 12.9 million per second, for a total of 3.34 trillion requests. Learn how the extreme scale, consistent performance, and high availability of DynamoDB let Amazon.com meet the needs of Prime Day without breaking a sweat.
Best for: Anyone who wants to learn how Amazon.com used DynamoDB to serve trillions of requests on Amazon Prime Day 2017.

DynamoDB Adaptive Capacity: Smooth Performance for Chaotic Workloads (DAT327)
Database capacity planning is critical to running your business, but it’s also challenging. In this session, we compare how scaling is usually performed for relational databases and NoSQL databases. We also discuss how DynamoDB shards your data across multiple partitions and servers. Finally, we cover some of the recent enhancements to DynamoDB that make scaling even simpler, particularly a new feature called adaptive capacity that eliminates many of the throttling issues you may have experienced.
Best for: Anyone who wants to learn more about database capacity planning and DynamoDB.

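Adaptive capacity itself works automatically and requires no configuration. If capacity planning is on your mind, though, the related DynamoDB auto scaling feature is something you set up yourself. Here is a rough boto3 sketch, assuming a hypothetical GameScores table with provisioned write capacity:

import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

# Let Application Auto Scaling adjust the table's provisioned write capacity
# between 5 and 1,000 units (table name and limits are hypothetical).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=1000,
)

# Scale to hold provisioned write utilization near 70 percent.
autoscaling.put_scaling_policy(
    PolicyName="GameScoresWriteScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
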
Tinder & DynamoDB: It’s a Match! Massive Data Migration, Zero Downtime (DAT328)
Are you considering a massive data migration? Do you worry about downtime during a migration? Dr. JunYoung Kwak, Tinder’s lead engineering manager, shares his insights into how Tinder successfully migrated critical user data to Amazon DynamoDB with no downtime. Join us to learn how Tinder leverages DynamoDB’s performance and scalability to meet the needs of its growing global user base.
Best for: Anyone who wants to learn how Tinder migrated critical user data to DynamoDB with no downtime.

Advanced Design Patterns for Amazon DynamoDB (DAT403)
In this expert-level session, we go deep into advanced design patterns for Amazon DynamoDB. The patterns and data models discussed in this presentation summarize a collection of implementations and best practices used by Amazon to deliver highly scalable solutions for a wide variety of business problems. We discuss strategies for the sharding of global secondary indexes and index overloading, scalable graph processing with materialized queries, relational modeling with composite keys, and executing transactional workflows on DynamoDB.
Best for: Those who already have some familiarity with DynamoDB.

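To make one of those patterns concrete: relational modeling with composite keys generally means storing related entities under the same partition key and encoding the relationship in the sort key, so a single Query returns the whole item collection. Here is a minimal boto3 sketch, assuming a hypothetical AppData table that models orders and their line items:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("AppData")  # hypothetical table with composite key (PK, SK)

# Items might look like (attribute names are hypothetical):
#   {"PK": "ORDER#1001", "SK": "METADATA",  "status": "shipped"}
#   {"PK": "ORDER#1001", "SK": "ITEM#0001", "sku": "abc", "qty": 2}
#   {"PK": "ORDER#1001", "SK": "ITEM#0002", "sku": "def", "qty": 1}

# One Query fetches every line item for the order, no joins required.
response = table.query(
    KeyConditionExpression=Key("PK").eq("ORDER#1001") & Key("SK").begins_with("ITEM#")
)
for item in response["Items"]:
    print(item["SK"], item["qty"])
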
Optimizing Serverless Application Data Tiers with Amazon DynamoDB (SRV301)
As a fully managed database service, Amazon DynamoDB is a natural fit for serverless architectures. In this session, we dive deep into why and how to use DynamoDB in serverless applications, followed by a real-world use case from Capital One. First, we discuss the relevant DynamoDB features and how you can use them effectively with AWS Lambda in solutions ranging from web applications to real-time data processing. We also show how some of the new features in DynamoDB, such as Auto Scaling and Time to Live (TTL), are particularly useful in serverless architectures, and distill the best practices to help you create effective serverless applications. In the second part of the session, we discuss how Capital One migrated billions of transactions to a completely serverless architecture and built a scalable, resilient, and fast transaction platform by leveraging DynamoDB, Lambda, and other services within the serverless ecosystem.
Best for: Anyone who wants to know how to use DynamoDB in serverless applications.

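As a small taste of the DynamoDB-plus-Lambda combination discussed above, the sketch below stores a short-lived record from a Lambda handler and relies on DynamoDB’s Time to Live (TTL) feature to expire it. The table name, attribute names, and one-hour lifetime are all hypothetical, and TTL is assumed to be enabled on the expires_at attribute.

import json
import os
import time

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "Sessions"))  # hypothetical table

def handler(event, context):
    """Store a short-lived session record. With TTL enabled on 'expires_at',
    DynamoDB deletes the item automatically after that epoch timestamp passes."""
    body = json.loads(event.get("body") or "{}")
    item = {
        "session_id": body.get("session_id", context.aws_request_id),
        "payload": body.get("payload", {}),
        "expires_at": int(time.time()) + 3600,  # expire roughly one hour from now
    }
    table.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"stored": item["session_id"]})}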

About the Author

Craig Liebendorfer is a senior technical editor at Amazon Web Services.