AWS Innovate Data Edition
Drive agility, digital transformation, and continuous innovation with your data today.

50+ Sessions
Watch all sessions on demand

Hands-on Labs
Guides

Customer Case Studies
Learn more

AWS e-books and guides
Read now

Reinvent your business with data

Welcome to the AWS Innovate Online Conference - Data Edition, a free virtual event designed to inspire and empower you to make better decisions and innovate faster with data. Learn how to uncover critical insights from your data and build a strong data foundation for rapid innovation while achieving agility, scalability, and cost savings.

Why should you attend?

Join us as we walk through the latest AWS announcements, technologies, and innovations. Dive deep into business use cases, architectures, and modernization best practices.

Who is it for?

Whether you are just getting started with AWS, an advanced user, a business executive, or simply curious, we have dedicated tracks for your experience level and job role.


Agenda

Get inspired and learn how to use data to accelerate innovation and drive greater agility and efficiency for your organization. Dive deep into any of the 50+ business and technical sessions led by AWS experts as they share core concepts, business use cases, and best practices to help you save time and cost managing data, eliminate data silos, gain accurate insights faster, and build a strong data foundation for rapid innovation.

Click here to view the downloadable agenda »

Choose a track:

  •  Bahasa Indonesia Sessions
  •  English Sessions

  •  Bahasa Indonesia Sessions
  • No-code data preparation with AWS Glue DataBrew: clean and normalize data / Glue ETL (Level 200)
    Data analysts and data scientists spend much of their time manually running data preparation tasks for analytics and ML modeling use cases to get clean, well-formatted data that meets their needs. As data continues to grow in size and complexity, organizations around the world are under pressure to expand the number of people who can prepare data and to reduce the time it takes to unlock value from it. In this session, dive deep into AWS Glue DataBrew, a new visual data preparation tool that enables data analysts and data scientists to clean and normalize data up to 80% faster without writing any code. We explain how AWS Glue DataBrew works, including popular transformations, and provide a walkthrough with use cases for data preparation across data stores.

    Speaker: Megah Fadhillah, Public Sector Solution Architect, AWS
    Duration: 30mins


    Simplified data integration with AWS Glue (Level 200)
    Data integration is not simply about migrating data. It is the process that enables workflows to streamline communication between systems and components, allowing you to extract maximum value from your data. In this session, learn how AWS Glue gives customers a fully managed, serverless extract, transform, and load (ETL) option to simplify their data integration architecture.

    Speaker: Donnie Prakoso, Senior Developer Advocate, AWS
    Duration: 30mins


    Migrate your on-premises data warehouse to Amazon Redshift (Level 200)
    Most companies are awash in data yet lack the critical insights needed to make timely and accurate business decisions, because traditional data warehouse architectures are costly, complex, and rigid. They miss the opportunity to combine the large volumes of new, unstructured big data sitting outside the data warehouse with the trusted, structured data inside it. Amazon Redshift is optimized for datasets ranging from a few hundred gigabytes to a petabyte or more, costs less than USD 1,000 per terabyte per year, a tenth of the cost of most traditional data warehousing solutions, and extends queries to your data lake without moving data. In this session, we discuss how migrating to Amazon Redshift lets you get better price performance at scale, automate administrative tasks, natively integrate your data lake, and analyze open data formats without having to load, transform, or move data.

    Speaker: Rudi Suryadi, Solutions Architect Manager Indonesia, AWS
    Duration: 30mins


    Purpose-built databases: Choose the right tool for each job (Level 200)
    Seldom can one database fit the needs of multiple distinct use cases. The days of the one-size-fits-all monolithic database are behind us, and highly distributed applications use many purpose-built databases. The world is changing, and the categories of databases continue to grow. We are increasingly seeing customers who want to build Internet-scale applications that require diverse data models. In response to these needs, we now have the choice of relational, key-value, wide-column, document, in-memory, graph, time-series, and ledger databases. Each solves a specific problem or group of problems. In this session, learn how AWS purpose-built databases meet the scale, performance, and manageability requirements of modern applications.

    Speaker: Petra Barus, Senior Developer Advocate, AWS
    Duration: 30mins


    Accelerate data analytics projects by building a data lake on AWS (Level 300)
    Harnessing data and analytics is key to increasing an organization's competitive advantage, enabling a deeper understanding of customers and data-driven decision making. To build data analytics capabilities quickly, businesses should start small and scale fast, and they can do so by building a data lake. With AWS, organizations can build serverless data lake pipelines with services such as AWS Glue, Amazon Athena, and Amazon S3.

    Speaker: Rudi Suryadi, Solutions Architect Manager Indonesia, AWS
    Duration: 30mins
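
To make the serverless pattern above concrete, here is a minimal boto3 sketch of querying data that AWS Glue has catalogued in Amazon S3 using Amazon Athena. The database name, table, query, and results bucket are hypothetical placeholders, not part of the session.

```python
import time
import boto3

# Minimal sketch: run an Athena query over data catalogued by AWS Glue in S3.
# The database, table, and result bucket below are hypothetical placeholders.
athena = boto3.client("athena", region_name="ap-southeast-1")

query = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) AS total FROM orders GROUP BY region",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes, then fetch the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```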


    Lake House architecture (Level 200)
    Data is predicted to reach 175 zettabytes globally by 2025. We need an agile, cost-effective way to analyze ever-growing data with minimal time to insight, regardless of the format and where the data is stored. In this session, we dive deep into how Amazon Redshift powers the lake house paradigm, enabling you to query data across your data warehouse, data lake, and operational databases to gain faster and deeper insights.

    Speaker: Donnie Prakoso, Senior Developer Advocate, AWS
    Duration: 30mins

  •  English Sessions
    • Opening and Closing
    • Data-driven organizations
    • Modernize data infrastructure: Data movement & management
    • Modernize data infrastructure: Databases
    • Unify your data: Accelerating insights
    • Innovate with analytics & ML
      Opening and Closing

      Opening keynote

      Reinventing business with data (Level 100)
      Effective leaders rely on data to make informed decisions, look around corners, and take meaningful action. They build a modern data strategy to deliver insights to the people and applications that need it, securely and at any scale. Uncover the latest in database, data, analytics and AI/ML and get insights on how organizations are harnessing the power of data to accelerate innovation in their businesses. Jumpstart and build the modern data strategy that allows you to consolidate, store, curate, and analyze data at any scale, as well as share data insights with everyone who needs them.

      Speaker: Rahul Pathak, Vice President of Analytics, AWS
      Duration: 60mins



      Closing keynote

      Rapid innovation with data analytics and machine learning (Level 200)
      To make decisions quickly, organizations want to store any amount of data in open formats, break down disconnected data silos, empower people to run analytics or machine learning using their preferred tool or technique, and manage who has access to specific pieces of data with the proper security and data governance controls. Learn how technologies like machine learning and analytics can unlock opportunities that were either too difficult or impossible before, enabling organizations with insights, transforming industries, and reshaping how customers consume and engage with products and services.

      Speaker: Olivier Klein, Chief Technologist, APJ, AWS
      Duration: 45mins


      Data-driven organizations

      About the track

      Get inspired and learn how organizations are using AWS to solve business challenges, optimize business performance, and innovate faster. Start leveraging your data as a strategic asset and reinvent your business with data today.

      Building a smarter, faster business with a modern data strategy (Level 100)
      Data is at the core of every business, and organizations rely on it to make informed decisions, look around corners, and take meaningful action. Building a data strategy is imperative and pivotal for businesses and organizations to stay relevant now and in the future. This session covers how customers across various industries have learned and experimented over the past months, using agile technology to unlock insights from their data and champion a culture of innovation to build for the future. Learn how a modern data strategy inspires business innovation, transforms customer experiences, and improves business outcomes.

      Speaker: Olivier Klein, Chief Technologist, APJ, AWS
      Duration: 45mins


      Transform your data approach - Develop your modern data strategy (Level 100)
      Every organization has different data needs, and disparate data sources, unreliable data pipelines, data silos, and a lack of know-how can keep data goldmines from becoming actionable insights. With the right modern data strategy, you can eliminate data silos, modernize your data and analytics architecture, and unlock the real value of your data to create the foundation for rapid innovation. Get insights and learn how to turn your data into a tool for innovation while achieving agility, scalability, and cost savings.

      Speaker: Blair Layton, Transformation Business Development Manager, AWS
      Duration: 30mins


      Reinventing data and analytics strategy for media and entertainment with AWS (Level 100)
      Media and entertainment customers face industry-wide transformation, with companies reinventing how they create content, optimize media supply chains, and compete for audience attention across streaming, broadcast, and direct-to-consumer platforms. In this session, learn how AWS purpose-built capabilities map to the different solution areas and help customers transform content production, media supply chains, broadcast, direct-to-consumer and streaming, and analytics. With AWS, you can select the right tools to accelerate production launches and see faster time to value.

      Speakers: 
      Christer Whitehorn, Lead Solutions Architect Media Services, APJC, AWS
      Sumit Kalia, Storage Specialist, ANZ, AWS
      Duration: 30mins


      How data literate is your organization? (Level 100)
      Enterprises are sitting on data goldmines they can harness for unique business value and competitive advantage. But as most leaders know, becoming a data-driven organization is much easier said than done. In this session, we discuss the common characteristics of a data-driven organization and the actions leaders can take to build one.

      Speaker: Babak Darashte, Head of Data & Analytics Solutions Architecture, APJ, AWS
      Duration: 30mins


      How AXA Singapore transformed into a data-driven enterprise through a modern data platform on AWS (Level 100)
      Organizations are on a data-driven journey to use actionable insights, analytics, and AI/ML to create new revenue streams, improve customer experience, and increase operational efficiency. AXA is one such organization that is leveraging analytics to transform from being an Insurance Provider to a Business Partner for their customers. In this session, learn how AXA fostered a customer-centric and “data-driven” self-service culture to enable their transformation.

      Speakers: 
      Ganamanas Das, Senior ASEAN GTM Specialist, AWS
      Romain Bourgeois, Head of Data Foundations, AXA Insurance Singapore
      Duration: 30mins


      Solving business challenges with your data (Level 100)
      Business impact driven by data insights translates into greater industry and citizen efficiencies. Discover how Intel compute powers AWS platforms to deliver better price performance for data analytics workloads while achieving faster time to data insights.

      Speaker: Akanksha Balani, AWS APJ Alliance head, Intel
      Duration: 30mins


      Modernize data infrastructure: Data movement & management

      About the track

      Gain best practices and concepts around data movement, eliminating data silos, and analyzing diverse datasets easily while keeping your data secure. Find out how to easily capture and centralize your data in a quick, cost-effective, and secure fashion using AWS.

      Introduction to data movement and management (Level 100)
      We are increasingly seeing our customers turn to AWS for better ways to move and manage their data, from efficient cost management and improved business agility to protection of their strategic data assets. In this session, we introduce the Data Movement and Management track and share how we are enabling customers to generate insights from their datasets, bust data migration myths, secure their data, and implement best practices for data management across different industries.

      Speaker: Luke Anderson, Head of Data Management and Storage, AWS
      Duration: 10mins


      A journey of analytic data to a warm lake, hot insights, and frozen glacier (Level 200)
      Traditional data warehouses struggle with vast amounts of data - it costs too much or data is siloed. This prohibits customers from innovating and extracting value from their data. Customers want to analyze their operations quicker and operationalize insights faster. In this session, we share how you can maximize performance and minimize the costs of your analytical data journey, from ingest, to processing, to archival for both structured and unstructured datasets, with a data heatmap classification. We focus on how you can combine services like Amazon Redshift for Data Warehousing, or Amazon FSx for Lustre for HPC Analytics, with Amazon S3, which forms the heart of the data lake for our customers.

      Speakers: 
      Wali Akbari, Senior Storage Solutions Architect, APJ, AWS
      Evgeny Minkevich, Senior Solutions Architect, AWS
      Duration: 30mins


      Migration myth busting on moving your data to AWS (Level 200)
      One of the most challenging steps in migrating to the cloud can be data migration, and these steps are often met with many myths and perceptions around the end-to-end process. With many tools and services now available to assist you, choosing the right methodology is an important process. AWS offers the most complete set of solutions for moving your data to the cloud, optimized for two key variables: cost and time. In this session, we focus on busting some myths associated with data migration, such as cost, time, network capabilities, online vs offline, data integrity, and migration tools. Learn which technologies and methods are best for your various needs, and how they work.

      Speaker: Naim Mucaj, Storage Specialist Solutions Architect, AWS
      Duration: 30mins


      Data Lake security in Amazon S3 (Level 200)
      As you build a data lake on Amazon S3, managing security and access is essential. You require granular access control for your data with strong controls around authentication, authorization, encryption, and auditing. At the same time, you require strong guardrails that protect your data from outside access, at scale. Amazon S3 provides enhanced data security features in the cloud, on both ends of this spectrum. In this session, get guidance on the mechanisms you use on AWS, from identity to networking control, to maintain tight control over your data.

      Speaker: Kumar Nachiketa, Storage Partner Solutions Architect APJ, AWS
      Duration: 30mins
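
As a rough illustration of the bucket-level guardrails this session covers, here is a minimal boto3 sketch that blocks public access and enforces default KMS encryption on a data lake bucket. The bucket name and key alias are hypothetical placeholders.

```python
import boto3

# Minimal sketch: apply two common data lake guardrails to an S3 bucket.
# "example-data-lake-bucket" and the KMS key alias are hypothetical placeholders.
s3 = boto3.client("s3")
bucket = "example-data-lake-bucket"

# 1) Block all public access at the bucket level.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# 2) Enforce default server-side encryption with a KMS key.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-data-lake-key",
                }
            }
        ]
    },
)
```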


      Build a secured, mutable, high performance data lake in days with AWS Lake Formation (Level 200)
      Organizations are breaking down data silos and building petabyte-scale data lakes on AWS. Since its launch, AWS Lake Formation has accelerated data lake adoption by making it easy to build and secure data lakes. In this session, we dive deep into a few existing and upcoming features in AWS Lake Formation that solve common data lake challenges, such as mutability, file compaction, and enforcing consistent fine-grained access to data across different user personas from a wide range of analytics and ML services. Learn how AWS Lake Formation allows you to share data securely within your organization and with your partners, without duplication or data movement, in just a few clicks.

      Speaker: Niladri Bhattacharya, Senior Analytics Specialist Solutions Architect, AWS
      Duration: 30mins
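
As a small, hedged sketch of the fine-grained access control described above, the snippet below grants column-level SELECT on a Glue catalog table to an IAM role through AWS Lake Formation. The role ARN, database, table, and column names are hypothetical placeholders.

```python
import boto3

# Minimal sketch: grant an analyst role SELECT on two columns of a catalog table.
# The principal ARN, database, table, and column names are hypothetical placeholders.
lakeformation = boto3.client("lakeformation")

lakeformation.grant_permissions(
    Principal={"DataLakePrincipalArn": "arn:aws:iam::123456789012:role/AnalystRole"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales_db",
            "Name": "orders",
            "ColumnNames": ["region", "amount"],
        }
    },
    Permissions=["SELECT"],
)
```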


      Data preparation made easy with AWS Glue DataBrew (Level 300)
      Data analysts and data scientists have spent a bulk of their time running data preparation tasks manually for analytics and ML modelling use cases to get clean and formatted data that meets their needs. As data continues to grow in size and complexity, organizations around the world are under pressure to expand the number of people who can prepare data and reduce the time it takes to unlock value from it. In this session, dive deep into AWS Glue DataBrew, a new visual data preparation tool that enables data analysts and data scientists to clean and normalize data up to 80% faster without writing any code. We explain how AWS Glue DataBrew works, including popular transformations, and provide a walkthrough with use cases for data preparation across data stores.

      Speaker: Vikas Omer, Senior Analytics Specialist Solutions Architect, AWS
      Duration: 30mins


      Simplify data integration with AWS Glue (Level 200)
      Data integration is not simply about migrating data. It is the process that enables workflows to streamline communications between systems and components which allows you to extract the maximum value from your data. In this session, learn how AWS Glue provides customers with fully managed serverless extract, transform, and load (ETL) options to simplify data integration architecture.

      Speaker: Tom McMeekin, Solutions Architect, AWS
      Duration: 30mins
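
To give a concrete feel for the managed ETL option described above, here is a minimal boto3 sketch that starts an existing AWS Glue job and checks its run state. The job name and job argument are hypothetical placeholders.

```python
import boto3

# Minimal sketch: kick off an existing AWS Glue ETL job and check its status.
# "example-orders-etl" and the job argument below are hypothetical placeholders.
glue = boto3.client("glue")

run = glue.start_job_run(
    JobName="example-orders-etl",
    Arguments={"--target_path": "s3://example-curated-bucket/orders/"},
)

status = glue.get_job_run(JobName="example-orders-etl", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])  # e.g. RUNNING, SUCCEEDED, FAILED
```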


      Modernize data infrastructure: Databases

      About the track

      In this track, find out how AWS cloud databases can help you meet your distinct use cases all while delivering operational efficiency, performance, availability, scalability, security, and compliance.

      Build the future of your business with AWS database (Level 100)
      Data is at the core of every application, and companies are looking to use data as the foundation for future innovation in their applications and their organizations. This session provides an overview of the Modernize data infrastructure: Databases track, including how to jumpstart and build for the future with our purpose-built databases.

      Speaker: Tony Gibbs, Databases Solutions Architect Leader, APJ, AWS
      Duration: 10mins


      Purpose-built databases: Choose the right tool for each job (Level 200)
      Seldom can one database fit the needs of multiple distinct use cases. The days of the one-size-fits-all monolithic database are behind us and highly distributed applications are using many purpose-built databases. The world is changing, and the categories of databases continue to grow. We are increasingly seeing customers wanting to build Internet-scale applications that require diverse data models. In response to these needs, we now have the choice of relational, key-value, wide column, document, in-memory, graph, time-series, and ledger databases. Each solves a specific problem or group of problems. In this session, learn how AWS purpose-built databases meet the scale, performance, and manageability requirements of modern applications.

      Speaker: William Wong, Senior Database Solutions Architect, AWS
      Duration: 30mins
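
As a small illustration of the purpose-built idea, here is a minimal boto3 sketch of a key-value access pattern on Amazon DynamoDB. The table and attribute names are hypothetical placeholders.

```python
import boto3

# Minimal sketch: a key-value access pattern on Amazon DynamoDB, one of the
# purpose-built options discussed. The table "UserSessions" is a hypothetical placeholder.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("UserSessions")

# Write and read a single item by its partition key.
table.put_item(Item={"session_id": "abc-123", "user_id": "u-42", "ttl_hint": 3600})
item = table.get_item(Key={"session_id": "abc-123"}).get("Item")
print(item)
```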


      Transform your Oracle database operating model by leveraging AWS managed databases (Level 200)
      Organizations are looking to free up their valuable Oracle DBA resources and shift their efforts to activities that are business enablers. The best way to do that is to leverage the power of fully managed databases in the cloud. This session shows why you should consider moving your self-managed Oracle databases to Amazon RDS for Oracle and how you can achieve it in the most efficient way. We showcase the benefits of a managed database operating model, discuss some transformative cost-saving techniques, and walk through some best-practice migration patterns. We demonstrate how to migrate an on-premises Oracle database to Amazon RDS for Oracle with minimal downtime. We also share the key ingredients for migration success, walk through the tools and mechanisms available to deliver a smooth transition, and introduce the additional benefits achievable by migrating from Oracle to a purpose-built database like Amazon Aurora PostgreSQL.

      Speaker: Matt McClernon, Solutions Architect, Oracle, AWS
      Duration: 40mins


      SQL Server on AWS: Choosing the right strategy (Level 300)
      Customers have been running Microsoft workloads on AWS for over 12 years, longer than on any other cloud provider. In this session, learn about the journey of migrating Microsoft SQL Server to AWS. While there is no one-size-fits-all approach for migrating SQL Server workloads to the cloud, there are three primary ways to move SQL Server-based applications to AWS: lift and shift to Amazon EC2, move to fully managed Amazon RDS, or modernize with AWS purpose-built databases. We explain the benefits of each approach to help you make the right choice and illustrate the architectural patterns for efficient migration of Microsoft SQL Server to different target platforms on AWS. The session also walks through available tools and services like AWS Database Migration Service, AWS Schema Conversion Tool, AWS Snow devices, AWS Storage Gateway, and AWS DataSync to support you in your migration journey. Uncover and learn more about strategies and best practices to migrate or modernize your SQL Server workloads on AWS.

      Speaker: Rita Ladda, Senior Database Solutions Architect, AWS
      Duration: 30mins


      Architecting on AWS managed databases like a rockstar! (Level 300)
      Self-managing databases, either on-premises or in the cloud, can be time-consuming, complex, and expensive. With AWS fully managed database services, such as Amazon RDS, Amazon Aurora, Amazon Neptune, Amazon ElastiCache, and Amazon DocumentDB, you do not need to worry about server provisioning, patching, setup, configuration, backups, or recovery. Instead, you can spend time innovating and building new applications and leave the infrastructure management to AWS. In this session, learn how to easily migrate and maintain database infrastructure and break free from managing legacy systems using AWS managed databases, all while delivering operational efficiency, performance, availability, scalability, security, and compliance. We demonstrate the benefits of using custom-built processors that deliver the best price performance for your cloud workloads. To conclude, we walk through best practices for architecting critical databases on Amazon Aurora.

      Speaker: Roneel Kumar, Senior Database Solutions Architect, AWS
      Duration: 40mins


      Build scalable, global applications with NoSQL databases (Level 200)
      When building global applications, low latency access to data is critical. Many organizations are revamping their data strategy as part of their digital transformation initiatives. Application modernization is a common theme for these initiatives, and a highly available, highly scalable database is a key requirement to succeed. Amazon DynamoDB and Amazon DocumentDB are designed from the ground up as fully managed, scalable, and purpose-built database services engineered for mission-critical workloads. In this session, learn how you can leverage Amazon DocumentDB and Amazon DynamoDB to build globally distributed applications on AWS.

      Speakers: 
      Sean Shiriver, Senior DynamoDB Specialist Solutions Architect, AWS
      Karthik Vijayraghavan, Senior DocumentDB Solutions Architect, AWS
      Duration: 45mins


      Extreme performance at cloud scale: Supercharge your real-time applications with Amazon ElastiCache (Level 200)
      With the explosive growth of business-critical, real-time applications, performance is one of the top considerations for companies across industries. Learn how you can improve performance to impact both revenue and customer satisfaction through a distributed cache with Amazon ElastiCache for Redis. ElastiCache is a fully managed in-memory caching service that enables microsecond latency and flexible scaling for real-time applications. In this session, discover how caching can supercharge your workloads and best practices for getting started with ElastiCache.

      Speaker: Damon Lacaille, Senior ElastiCache Specialist Solutions Architect, AWS
      Duration: 30mins
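
As a rough sketch of the caching pattern this session discusses, here is a minimal cache-aside example using the open-source redis-py client against an ElastiCache for Redis endpoint. The endpoint hostname and the load_product_from_db helper are hypothetical placeholders.

```python
import json
import redis

# Minimal sketch of the cache-aside pattern against an ElastiCache for Redis endpoint.
# The endpoint hostname and load_product_from_db() are hypothetical placeholders.
cache = redis.Redis(host="example-cache.abc123.ng.0001.apse1.cache.amazonaws.com", port=6379)

def load_product_from_db(product_id: str) -> dict:
    # Placeholder for a database lookup (e.g. Amazon RDS or DynamoDB).
    return {"id": product_id, "name": "example", "price": 42}

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached:  # cache hit: sub-millisecond read path
        return json.loads(cached)
    product = load_product_from_db(product_id)
    cache.setex(key, 300, json.dumps(product))  # cache miss: populate with a 5-minute TTL
    return product

print(get_product("p-1001"))
```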


      Unify your data: Accelerating insights

      About the track

      Learn the approaches, tools, and frameworks to break down data silos, unify your data, and make data more accessible to everyone who needs it, so they can seamlessly discover, access, and analyze the data in a secure and governed way.

      Harness the power of your data with AWS analytics (Level 100)
      Data is growing exponentially, coming from various sources, is increasingly diverse, and needs to be securely accessed and analyzed by any number of applications and people. Learn how analytics on AWS and the Lake House architecture puts the right tools in your hands to create delightful customer experiences and data-driven organizations.

      Speaker: Francis McGregor-Macdonald, Principal Analytics Specialist Solutions Architect, AWS
      Duration: 10mins


      Analytics services optimized for your use case (Level 200)
      AWS gives you the broadest and deepest portfolio of purpose-built analytics services optimized for your unique analytics use cases. These services are all designed to be the best in class, which means you never have to compromise on performance, scale, or cost when using them.

      Speaker: Francis McGregor-Macdonald, Principal Analytics Senior Solutions Architect, AWS
      Duration: 30mins


      Break free from the constraints of on-premises data warehouses with Amazon Redshift (Level 200)
      Customers with on-premises data warehouses find them complex and expensive to manage, especially with respect to data loading and performance. With Amazon Redshift, the most popular data warehouse in the cloud, you can scale storage and compute independently, increase available computing power to handle unpredictable workload spikes, and automatically optimize throughput, while AWS takes care of the undifferentiated heavy lifting and lets you focus on the challenges that are core to your business. In this session, learn how customers have made the move, the migration tools offered by AWS, and new features in Amazon Redshift that make migration faster and easier.

      Speaker: Marco Ullasci, Senior Account Solutions Architect, AWS
      Duration: 30mins


      New data warehousing use cases with Amazon Redshift (Level 200)
      As data grows, we need innovative approaches to gain insight from all the information at scale and speed. You can use new Amazon Redshift features to do more with your data. Amazon Redshift powers the lake house architecture, enabling you to query data across your data warehouse, data lake, and operational databases to gain faster and deeper insights. You can use the Redshift Data API to build web services-based applications that access your data using Python, Node.js, and other languages supported by the AWS SDK. You can simplify JSON and semi-structured data handling with the SUPER data type. You can enable data as a service, cross-group collaboration, and workload isolation with data sharing. And you can run scan, filtering, and aggregation operations up to 10x faster than other enterprise cloud data warehouses with AQUA for Amazon Redshift. Join this session and learn about these new features and use cases.

      Speaker: Kerry McRae, Solutions Architect, AWS
      Duration: 30mins
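
To illustrate two of the features named above, here is a minimal boto3 sketch that uses the Amazon Redshift Data API to run SQL that navigates a SUPER column. The cluster, database, user, table, and column names are hypothetical placeholders.

```python
import time
import boto3

# Minimal sketch: use the Amazon Redshift Data API to query semi-structured data
# stored in a SUPER column. Cluster, database, user, and table names are hypothetical.
rsd = boto3.client("redshift-data")

stmt = rsd.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="analytics_user",
    Sql="SELECT event_id, payload.device.os FROM clickstream_events LIMIT 10",
)

# Wait for the statement to finish, then fetch the result set.
while True:
    desc = rsd.describe_statement(Id=stmt["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if desc["Status"] == "FINISHED":
    for record in rsd.get_statement_result(Id=stmt["Id"])["Records"]:
        print(record)
```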


      Modernize log analytics with Amazon Elasticsearch Service (Level 200)
      Organisations of all sizes use Amazon Elasticsearch Service to handle their log analytics workloads. With its deep integration with AWS native services and open source frameworks, customers can build robust, cost effective, and scalable log analytics solutions to process from several GBs to petabytes of data. In this session, we will review the best practices and design patterns to build scalable and secure log analytics solutions using Amazon Elasticsearch Service.

      Speaker: Muhammad Ali, Senior Analytics Specialist Solutions Architect, AWS
      Duration: 30mins


      Beyond batch processing - real-time analytics at scale with Apache Flink (Level 300)
      Real-time analytics are on the rise. Apache Flink is a popular purpose-built framework and distributed processing engine for large scale low latency data processing in real-time. In this session, we cover a brief overview of this popular framework, and a demo to build a real-time sentiment analysis application to analyze customer feedback with Apache Flink on Amazon Kinesis Data Analytics Studio.

      Speaker: Masudur Rahaman Sayem, Analytics Specialist Solutions Architect, AWS
      Duration: 30mins
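
As a loose sketch of the kind of streaming SQL job the session builds, here is a minimal PyFlink example. It uses a generated source and a print sink in place of the Amazon Kinesis stream and sentiment logic used in the actual demo, and all table and column names are hypothetical.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Minimal PyFlink sketch: a continuously updating aggregation over a stream.
# A generated source stands in for a real Kinesis stream of customer feedback.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE feedback (
        rating INT,
        channel STRING
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '5',
        'fields.rating.min' = '1',
        'fields.rating.max' = '5'
    )
""")

t_env.execute_sql("""
    CREATE TABLE results (
        channel STRING,
        avg_rating DOUBLE
    ) WITH ('connector' = 'print')
""")

# Continuously aggregate the stream and write updates to the print sink.
# This is an unbounded job, so wait() blocks until the job is cancelled.
t_env.execute_sql(
    "INSERT INTO results SELECT channel, AVG(CAST(rating AS DOUBLE)) FROM feedback GROUP BY channel"
).wait()
```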


      Move Apache Spark, Hadoop, and other big data applications to the cloud with Amazon EMR (Level 300)
      Organizations can use Amazon EMR to run petabyte-scale analysis using Apache Spark, Hadoop and other open source frameworks at less than half of the cost of traditional on-premises solutions. Amazon EMR makes it easy to set up, operate, and scale big data environments by automating time-consuming tasks like provisioning capacity and tuning clusters, and runs applications faster than open source Apache Spark. Join this session to learn more about Amazon EMR and why it is the best place for your analytics workloads.

      Speaker: Melody Yang, Senior Analytics Specialist Solutions Architect, AWS
      Duration: 30mins


      Innovate with analytics & ML

      About the track

      Discover the various machine learning integration services available on AWS that can help you build, deploy, and innovate at scale. We also focus on how AI services are applied to common use cases such as personalized recommendations, identity verification, and automating document processing workflow.

      Build new experiences and reimagine processes with Analytics and Machine Learning (Level 100)
      In this session, we show you how to reinvent new experiences and reimagine old processes with Analytics and AI & Machine Learning (ML) to match your business and organization needs. We explain how customers turn their data into insights using a modern data architecture and focus on the different ways ML is added to AWS database and analytics services to help practitioners use ML without prior knowledge.

      Speaker: Anna Coniglio,  Manager Analytics Specialist Solutions Architect, AWS
      Duration: 10mins


      Democratizing Insights: Scaling machine learning beyond the data scientist (Level 200)
      Machine learning is one of the most disruptive technologies of our generation, and is key to unlocking deep insights never imagined before. However, machine learning has traditionally been seen as the domain of experts with PhDs and mathematics degrees, and a limited market of skilled professionals has become a barrier for organizations wanting to scale their initiatives. With trends such as no-/low-code applications and augmented analytics, a new wave of data and analytics tools is emerging, enabling the development of sophisticated insights with little upskilling required. In this session, learn more about scaling ML beyond the role of the data scientist and see a demonstration of three AWS tools you can leverage to enable insights beyond your traditional skillset: Amazon Redshift ML, Amazon QuickSight Q, and Amazon QuickSight ML Insights.

      Speaker: Anna Coniglio,  Manager Analytics Specialist Solutions Architect, AWS
      Duration: 30mins


      Embedding analytics into your SaaS applications with Amazon QuickSight (Level 200)
      As businesses become more data-driven, there is a growing emphasis on providing users with quick and seamless access to actionable insights. By moving information closer to the primary user experience through applications and portals, data practitioners are looking to deliver insights to their users where they are. In this session, learn how to bring data closer to your users by seamlessly embedding analytics within applications using Amazon QuickSight. We walk through how you can quickly leverage QuickSight APIs to embed your own analytics and share a few tips and tricks to help you easily enhance your applications with rich, interactive data visualizations and analytics without any custom development.

      Speaker: Olivia Carline, QuickSight Solution Architect, AWS
      Duration: 30mins
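
As a rough sketch of the embedding flow described above, here is a minimal boto3 example that requests an embeddable URL for an existing QuickSight dashboard; your application would then load that URL in an iframe or via the QuickSight embedding SDK. The account ID and dashboard ID are hypothetical placeholders.

```python
import boto3

# Minimal sketch: request a short-lived embed URL for an existing QuickSight dashboard.
# Account ID and dashboard ID are hypothetical placeholders.
quicksight = boto3.client("quicksight", region_name="ap-southeast-1")

response = quicksight.get_dashboard_embed_url(
    AwsAccountId="123456789012",
    DashboardId="example-dashboard-id",
    IdentityType="IAM",
    SessionLifetimeInMinutes=600,
)
print(response["EmbedUrl"])  # URL to embed in your SaaS application
```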


      Deliver better customer experiences with machine learning in real-time (Level 300)
      Businesses are increasingly using machine learning (ML) to make near-real-time decisions, such as placing an ad, assigning a driver, recommending a product, detecting fraudulent transactions or even dynamically pricing products and services. Real-time machine learning can substantially enhance your customers' experience, resulting in better engagement and retention. In this session, learn how to use AWS data streaming platforms such as Amazon Kinesis to collect and process data in real-time, and Amazon SageMaker Feature Store, which provides a fully managed central repository for ML features, to support real-time machine learning. We will also walk through the architecture that supports the ML-backed decisions in near-real time for a credit card fraud detection use case.

      Speaker: Aneesh Chandra PN, Senior Analytics Specialist Solutions Architect, AWS
      Duration: 30mins
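
To make the two building blocks above concrete, here is a minimal boto3 sketch that streams a transaction into Amazon Kinesis and reads low-latency features from the SageMaker Feature Store online store. The stream name, feature group, and fields are hypothetical placeholders.

```python
import json
import boto3

# Minimal sketch of two building blocks the session combines: real-time ingestion
# into Kinesis and low-latency feature lookups from the Feature Store online store.
# Stream, feature group, and field names are hypothetical placeholders.
kinesis = boto3.client("kinesis")
featurestore = boto3.client("sagemaker-featurestore-runtime")

# 1) Ingest an event in real time.
event = {"card_id": "card-9001", "amount": 123.45, "merchant": "example-store"}
kinesis.put_record(
    StreamName="example-transactions",
    Data=json.dumps(event),
    PartitionKey=event["card_id"],
)

# 2) Fetch the latest engineered features for that card to score the transaction.
record = featurestore.get_record(
    FeatureGroupName="card-features",
    RecordIdentifierValueAsString=event["card_id"],
)
features = {f["FeatureName"]: f["ValueAsString"] for f in record.get("Record", [])}
print(features)  # e.g. rolling averages used by a fraud-detection model
```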


      Sentiment analysis using Amazon Aurora machine learning integration (Level 200)
      In this session, learn how to turn relational data into insights using Amazon Aurora and its machine learning integration. Uncover how to take existing data in your relational database, for example product reviews or blog comments, and extract the sentiment hidden in it. We walk through Aurora's integration with Amazon Comprehend, which uses natural language processing (NLP), and Amazon SageMaker, and focus on using simple SQL queries to achieve this. And yes, no prior knowledge of machine learning is needed!

      Speaker: Suman Debnath, Principal Developer Advocate, AISPL
      Duration: 30mins
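
The session itself runs sentiment analysis from SQL inside Aurora; as a hedged companion, the sketch below shows the equivalent direct Amazon Comprehend call that the integration builds on. The sample review text is a hypothetical placeholder.

```python
import boto3

# Minimal sketch: the Amazon Comprehend sentiment call that the Aurora ML
# integration surfaces through SQL. The review text is a hypothetical placeholder.
comprehend = boto3.client("comprehend")

review = "The delivery was late, but the product quality is excellent."
result = comprehend.detect_sentiment(Text=review, LanguageCode="en")
print(result["Sentiment"], result["SentimentScore"])  # e.g. MIXED with per-class scores
```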


      Building a document processing solution with Amazon Textract, database, data lakes, and analytics on AWS (Level 200)
      Organizations have many traditional documents and forms that hold critical business data. These documents, such as medical forms, insurance claims and loan applications, have structured and unstructured data that are either extracted by humans or by rule-based systems which are not easily scalable, have high costs, and could produce low-accuracy extraction results. In this session, learn how to use Amazon Textract to extract structured data to more accessible file types or to a queryable database, integrate a data lake and analytics solution, visualize the extraction results and deploy your automated document processing workflow into production, at scale.

      Speaker: Ryan Hendriks, Principal Solution Architect – Data Modernization, AWS
      Duration: 30mins
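
As a rough sketch of the first step in the pipeline described above, here is a minimal boto3 example that sends a scanned document to Amazon Textract for forms and tables analysis. The bucket and object key are hypothetical placeholders.

```python
import boto3

# Minimal sketch: extract form and table blocks from a scanned document with
# Amazon Textract. The bucket and object key are hypothetical placeholders.
textract = boto3.client("textract")

response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "example-claims-bucket", "Name": "claims/claim-001.png"}},
    FeatureTypes=["FORMS", "TABLES"],
)

# Count the detected building blocks; downstream steps would map KEY_VALUE_SET
# blocks into queryable rows for a database or data lake.
block_types = [b["BlockType"] for b in response["Blocks"]]
print({t: block_types.count(t) for t in set(block_types)})
```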


      Image and video analysis with Amazon Rekognition and Analytics on AWS (Level 200)
      Organisations are looking for ways to extract data from images and videos. Storing and manual cataloging of such media is expensive, error-prone, and hard to scale. Computer vision technology enables digital media professionals to generate valuable insights at a quick pace and at a lower cost by automatically identifying the contents of images and video. In this session, learn how you can integrate artificial intelligence into your media workflows with Amazon Rekognition, a deep learning-based image and video analysis service. We will also cover different use cases in applications such as security, public safety, identity verification, and media and entertainment.

      Speaker: Rohini Gaonkar, Senior Developer Advocate, India, AISPL
      Duration: 30mins
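
As a small illustration of the automatic labeling described above, here is a minimal boto3 sketch that detects labels in an image with Amazon Rekognition. The bucket and object key are hypothetical placeholders.

```python
import boto3

# Minimal sketch: automatically label the contents of an image stored in S3 with
# Amazon Rekognition. The bucket and object key are hypothetical placeholders.
rekognition = boto3.client("rekognition")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-media-bucket", "Name": "frames/frame-0042.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```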

Session levels designed for you

INTRODUCTORY
Level 100

Sessions focus on providing an overview of AWS services and features, assuming attendees are new to the topic.

INTERMEDIATE
Level 200

Sessions focus on best practices, service feature details, and demos, assuming attendees have introductory knowledge of the topic.

ADVANCED
Level 300

Sessions dive deeper into the selected topic. Presenters assume the audience has some familiarity with the topic, but they may or may not have hands-on experience implementing a similar solution.

Featured speakers

Rahul Pathak
Vice President of Analytics, AWS

Olivier Klein
Chief Technologist, APJ, AWS


Learn more about Analytics on AWS

10,000

customers use data lakes and data warehouses on AWS

A Leader in the Gartner Magic Quadrant for Cloud AI Developer Services

450,000

databases migrated to AWS

3x

faster at petabyte-scale analytics than standard Apache Spark

89%

of cloud-based deep learning projects run on AWS


FAQ

1. Where is AWS Innovate hosted?
2. Who is AWS Innovate for?
3. Are sessions available in other languages?
4. How much does it cost to attend AWS Innovate?
5. Can I get confirmation of my AWS Innovate registration?
6. How do I contact the organizers of this online conference?

Q: Where is AWS Innovate hosted?
A:
AWS Innovate is a free online conference. After completing the registration form, you will receive an email to finish your registration. Follow the instructions and complete the steps to receive a confirmation email and gain access to the event.

Q: Who is AWS Innovate for?
A:
Whether you are new to AWS or an experienced user, you can learn something new at AWS Innovate. AWS Innovate is designed to help you develop the right skills to generate new insights, enable new efficiencies, and make more accurate predictions.

Q: Are sessions available in other languages?
A:
This online series is available in Bahasa Indonesia, English, Korean, and Japanese.

Q: How much does it cost to attend AWS Innovate?
A:
AWS Innovate is a free online event.

Q: Can I get confirmation of my AWS Innovate registration?
A:
After completing the registration form, you will receive an email to finish your registration. Follow the instructions and complete the steps to receive a confirmation email and gain access to the event.

Q: How do I contact the organizers of this online conference?
A:
If you have a question that is not answered in the FAQ above, please email us.

Start learning AWS services with the AWS Free Tier

Sign up for an AWS account to enjoy free offers for Amazon S3, Amazon RDS, Amazon SageMaker, and more than 100 AWS services.
View AWS Free Tier details »

Rahul Pathak, Vice President of Analytics, AWS

Rahul Pathak is Vice President of Analytics at AWS, responsible for Amazon Athena, Amazon Elasticsearch Service, Amazon EMR, AWS Glue, AWS Lake Formation, and Amazon Redshift. During his nine years at AWS, he has focused on managed databases, analytics, and database services. Rahul has over twenty years of industry experience and co-founded two companies, one focused on digital media analytics and the other on IP geolocation. He holds a degree in Computer Science from MIT and an Executive MBA from the University of Washington.

Olivier Klein, Chief Technologist, APJ, AWS

Olivier is a hands-on technologist who loves engaging with customers. With more than 10 years of industry experience, he has worked for AWS across APAC and Europe, helping customers build resilient, highly scalable, secure, and cost-effective applications and create innovative, data-driven business models. He shares how emerging technologies in artificial intelligence, machine learning, and IoT can help create new products, make existing processes more efficient, provide end-to-end business insights, and leverage new customer engagement channels. He also actively helps customers build platforms that align their IT infrastructure, effectively improving efficiency and disrupting product development processes that have been in place for decades.