General

What is time-series data?

Time-series data is a sequence of data points—such as stock prices, temperature, and CPU use of an EC2 instance—recorded over time. Each data point consists of a timestamp, one or more attributes, and a value that changes over time. This data is used to derive insights into the performance and health of an application, detect anomalies, and identify optimization opportunities.

For example, DevOps engineers might want to view data that measures changes in infrastructure performance metrics, manufacturers might want to track IoT sensor data that measures changes in equipment temperature across a facility, and online marketers might want to analyze clickstream data that captures how a user navigates a website over time. As time-series data is generated from multiple sources in extremely high volumes, it needs to be cost-effectively stored and analyzed in near real time to derive key business insights.

Which engines does Amazon Timestream support?

Amazon Timestream offers fully managed InfluxDB, one of the most popular open source time-series databases in the market, and LiveAnalytics, a serverless time-series database built for scale.

How do I get started with Timestream?

You can get started with Timestream using the AWS Management Console, AWS Command Line Interface (AWS CLI), or SDKs. For more information, including tutorials and other getting started content, see the developer guide.

When should I use Amazon Timestream for LiveAnalytics compared to Amazon Timestream for InfluxDB?

Amazon Timestream for InfluxDB should be used for use cases that require near real-time time-series queries and when you need InfluxDB features or open source APIs. The existing Timestream engine, Amazon Timestream for LiveAnalytics, should be used when you need to ingest more than tens of gigabytes of time-series data per minute and run SQL queries on terabytes of time-series data in seconds.

Can I use Timestream for InfluxDB and Timestream for LiveAnalytics together?

Yes. The two engines complement each other for low-latency and large-scale ingestion of time-series data. You can ingest data into Timestream for InfluxDB and use a Telegraf plugin to send data into Timestream for analysis of historical data through SQL queries.

Are there additional charges to use Timestream for LiveAnalytics with Timestream for InfluxDB?

If you decide to migrate your Timestream for InfluxDB data into Timestream for LiveAnalytics, you will incur standard charges for that service, including ingestion, storage, and querying. Using Timestream for LiveAnalytics alongside Timestream for InfluxDB is optional.

How will this affect my existing workloads in Timestream for LiveAnalytics?

Timestream for InfluxDB can be used separately or with your Timestream for LiveAnalytics workloads. Timestream for InfluxDB is targeted for near real-time applications with single-digit millisecond response times. Timestream for LiveAnalytics addresses use cases that need to ingest gigabytes of data in minutes and query terabytes of data in seconds. You can combine Timestream for InfluxDB and Timestream for LiveAnalytics within your applications or dashboards.

Do I need to define a schema before sending data to Timestream?

No. Timestream dynamically creates a table’s schema based on a set of dimensional attributes and measures. This offers flexible and incremental schema definition that can be adjusted at any time without affecting availability.
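
For example, here is a minimal sketch using the boto3 timestream-write client (the database and table names are placeholders): the dimensions and measure carried in the record itself define the schema, with no column definitions declared ahead of time.

```python
import time

import boto3

write_client = boto3.client('timestream-write')

# The record's dimensions and measure implicitly define the table schema.
write_client.write_records(
    DatabaseName='my_database',  # placeholder names
    TableName='my_table',
    Records=[{
        'Dimensions': [
            {'Name': 'region', 'Value': 'us-east-1'},
            {'Name': 'host', 'Value': 'host-1'},
        ],
        'MeasureName': 'cpu_utilization',
        'MeasureValue': '57.3',
        'MeasureValueType': 'DOUBLE',
        'Time': str(int(time.time() * 1000)),  # epoch milliseconds
        'TimeUnit': 'MILLISECONDS',
    }],
)
```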

How do I access my Timestream databases?

Once your databases have been created and become available, you can retrieve endpoint information from the Timestream console. Alternatively, you can use the Describe APIs to retrieve endpoint information (DescribeDatabase when using Timestream for LiveAnalytics and DescribeDbInstances when using Timestream for InfluxDB).
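
As a rough sketch with boto3 (the database name and instance identifier are placeholders, and the response field names follow the current SDK shapes, so verify them against the API reference):

```python
import boto3

# Timestream for LiveAnalytics: describe a database
ts_write = boto3.client('timestream-write')
db = ts_write.describe_database(DatabaseName='my_database')
print(db['Database']['Arn'])

# Timestream for InfluxDB: the instance description includes its endpoint
influx = boto3.client('timestream-influxdb')
instance = influx.get_db_instance(identifier='my-instance-id')
print(instance['endpoint'])
```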

How do I use Timestream with Grafana?

You can visualize your Timestream time-series data and create alerts using Grafana, a multiplatform, open source analytics and interactive visualization tool. To learn more and find sample applications, see the documentation.

How can I send data to Timestream using AWS Lambda?

You can create AWS Lambda functions that interact with Timestream. For more detailed information, see the documentation.
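
A minimal sketch of such a function, assuming the invoking event carries a device ID and a numeric reading (the database, table, and field names are placeholders):

```python
import time

import boto3

write_client = boto3.client('timestream-write')

def lambda_handler(event, context):
    # Write one record per invocation using values from the event payload.
    write_client.write_records(
        DatabaseName='my_database',  # placeholder names
        TableName='my_table',
        Records=[{
            'Dimensions': [{'Name': 'device_id', 'Value': event['device_id']}],
            'MeasureName': 'reading',
            'MeasureValue': str(event['reading']),
            'MeasureValueType': 'DOUBLE',
            'Time': str(int(time.time() * 1000)),
            'TimeUnit': 'MILLISECONDS',
        }],
    )
    return {'statusCode': 200}
```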

How can I send data to Timestream using open source Telegraf?

You can send time-series data collected using open source Telegraf directly into Timestream with the Telegraf connector. For more detailed information, see the documentation.

Can I use Timestream in an Amazon VPC?

You can access Timestream from your Amazon Virtual Private Cloud (Amazon VPC) using VPC endpoints. Amazon VPC endpoints are easy to configure and provide reliable connectivity to Timestream APIs without requiring an internet gateway or NAT instance.

Can I use Timestream through AWS CloudFormation?

AWS CloudFormation simplifies provisioning and management by letting you provision services and applications quickly and reliably from templates. CloudFormation provides comprehensive support for Timestream, including templates to create databases for both Timestream for LiveAnalytics and Timestream for InfluxDB. The templates are kept up to date with the latest Timestream for InfluxDB release, giving Timestream customers flexibility and ease of use.

Timestream for LiveAnalytics

Overview

What is Amazon Timestream for LiveAnalytics?

Amazon Timestream for LiveAnalytics is a fast, scalable, and serverless time-series database built for large-scale workloads. It automatically scales up or down to adjust capacity and performance, so you don’t need to manage the underlying infrastructure. Its fully decoupled architecture allows you to ingest trillions of data points and run millions of queries per day.

The Timestream for LiveAnalytics adaptive query engine lets you access and analyze recent and historical data together, without having to specify its location. It has built-in time-series analytics functions, helping you identify trends and patterns in your data in near real time.

How does Timestream for LiveAnalytics work?

Timestream for LiveAnalytics is designed to collect, store, and process time-series data. Its serverless architecture supports fully decoupled data ingestion, storage, and query processing services that can scale independently, enabling virtually infinite scale for your application’s needs. Rather than predefining the schema at table creation time, a Timestream table’s schema is dynamically created based on the attributes of the incoming time-series data, allowing for flexible and incremental schema definition.

For data storage, Timestream for LiveAnalytics partitions the data by time and attributes, accelerating data access using a purpose-built index. Attributes of your data such as measure name or the chosen partitioning key play a critical role in effectively partitioning and performantly retrieving your data. In addition, Timestream for LiveAnalytics automates data lifecycle management by offering an in-memory store for recent data and magnetic store for historical data and by supporting configurable rules to automatically move data from the memory store to the magnetic store as it reaches a certain age.

Timestream for LiveAnalytics also simplifies data access through its purpose-built adaptive query engine that can seamlessly access and combine data across storage tiers without having to specify the data location, so you can quickly and easily derive insights from your data using SQL. Lastly, Timestream works seamlessly with your preferred data collection, visualization, analytics, and machine learning (ML) services, making it easy for you to include Timestream in your time-series solutions.

What availability does Timestream for LiveAnalytics have?

Timestream for LiveAnalytics provides 99.99% availability. For more information, refer to the service level agreement (SLA).

How am I billed for Timestream for LiveAnalytics?

With Timestream for LiveAnalytics, you pay only for what you use. You are billed separately for writes, data stored, and data scanned by queries. Timestream automatically scales your writes, storage, and query capacity based on usage. You can set the data retention policy for each table and choose to store data in an in-memory or magnetic store. For detailed pricing, see the pricing page.

Does Timestream for LiveAnalytics offer a free trial?

Yes, Timestream for LiveAnalytics offers a 1-month free trial for all new accounts. The free trial usage is capped at 50 GB of ingestion, 100 GB of magnetic storage, 750 GB-hours of memory storage, and 750 GB of data scanned.

How am I billed if I exceed the free trial usage?

You are billed at standard Timestream for LiveAnalytics pricing for the usage beyond what the free trial provides. Additional details are on the pricing page.

In what AWS Regions is Timestream for LiveAnalytics available?

For current Region availability, see the pricing page.

Performance and scale

What performance can I expect from Timestream for LiveAnalytics?

Timestream for LiveAnalytics offers near real-time latencies for data ingestion. The Timestream for LiveAnalytics built-in memory store is optimized for rapid point-in-time queries, and the magnetic store is optimized to support fast analytical queries. With Timestream for LiveAnalytics, you can run queries that analyze tens of gigabytes of time-series data from the memory store within milliseconds and analytical queries that analyze terabytes of time-series data from the magnetic store within seconds. Scheduled queries further improve query performance by calculating and storing the aggregates, rollups, and other real-time analytics used to power frequently accessed operational dashboards, business reports, applications, and device monitoring systems.

You can store exabytes of data in a single table. As your data grows over time, Timestream for LiveAnalytics uses its distributed architecture and massive amounts of parallelism to process larger volumes of data while keeping query latencies almost unchanged.

How does Timestream for LiveAnalytics scale?

The Timestream for LiveAnalytics serverless architecture supports fully decoupled data ingestion, storage, and query processing systems that can scale independently. The service continuously monitors your application’s ingestion, storage, and query rates and scales instantly, with no downtime for your application.

What are the current limits and quotas for Timestream for LiveAnalytics?

For current limits and quotas, see the documentation.

Data Ingestion

How can I send data to Timestream for LiveAnalytics?

You can collect time-series data from connected devices, IT systems, and industrial equipment and write it into Timestream for LiveAnalytics. You can send data to Timestream for LiveAnalytics either directly from your application using the AWS SDKs or from data collection services such as AWS IoT Core, Amazon Managed Service for Apache Flink, or Telegraf. For more information, see the documentation.

How do I deal with late or future arrival data in Timestream for LiveAnalytics?

Late arrival data is data that has a timestamp in the past and is outside the retention boundary of the memory store. Future data is data that has a timestamp in the future. Timestream lets you store and access both kinds.

To store late arrival data, you simply write the data into Timestream for LiveAnalytics, and the service will automatically determine whether it gets written to the memory store or to the magnetic store based on the timestamp of the data and the configured data retention period for the memory and magnetic stores. To store data that is beyond 15 minutes into the future, model your data as a multi-measure record and represent the future timestamp as a measure within the record.
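
For instance, here is a sketch of a multi-measure record that carries a forecast timestamp one hour ahead as a measure (the names are placeholders, and the epoch-milliseconds format for the TIMESTAMP measure is an assumption to verify against the data types documentation):

```python
import time

import boto3

write_client = boto3.client('timestream-write')

now_ms = int(time.time() * 1000)
future_ms = now_ms + 3600 * 1000  # one hour ahead, beyond the 15-minute window

write_client.write_records(
    DatabaseName='my_database',  # placeholder names
    TableName='my_table',
    Records=[{
        'Dimensions': [{'Name': 'device_id', 'Value': 'sensor-1'}],
        'MeasureName': 'forecast',
        'MeasureValueType': 'MULTI',
        'MeasureValues': [
            {'Name': 'predicted_temp', 'Value': '21.5', 'Type': 'DOUBLE'},
            # The future timestamp travels as a measure, not as the record time.
            {'Name': 'forecast_time', 'Value': str(future_ms), 'Type': 'TIMESTAMP'},
        ],
        'Time': str(now_ms),  # the record time itself stays current
        'TimeUnit': 'MILLISECONDS',
    }],
)
```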

When should I use batch load?

Using batch load, you can ingest CSV files stored in Amazon Simple Storage Service (Amazon S3) into Timestream for LiveAnalytics. You can use batch load for backfilling data that is not immediately required for analysis. You can create batch load tasks by using the AWS Management Console, AWS CLI, and AWS SDKs. For more information, see the documentation.
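
A hedged sketch of creating a batch load task with boto3; the buckets, CSV columns, and data model below are illustrative assumptions:

```python
import boto3

write_client = boto3.client('timestream-write')

# Ingest CSV files under s3://my-source-bucket/backfill/ into a table,
# mapping a 'timestamp' column to time and a 'cpu' column to a measure.
write_client.create_batch_load_task(
    TargetDatabaseName='my_database',
    TargetTableName='my_table',
    DataSourceConfiguration={
        'DataSourceS3Configuration': {
            'BucketName': 'my-source-bucket',
            'ObjectKeyPrefix': 'backfill/',
        },
        'DataFormat': 'CSV',
    },
    DataModelConfiguration={
        'DataModel': {
            'TimeColumn': 'timestamp',
            'TimeUnit': 'MILLISECONDS',
            'DimensionMappings': [
                {'SourceColumn': 'device_id', 'DestinationColumn': 'device_id'},
            ],
            'MultiMeasureMappings': {
                'TargetMultiMeasureName': 'metrics',
                'MultiMeasureAttributeMappings': [
                    {'SourceColumn': 'cpu',
                     'TargetMultiMeasureAttributeName': 'cpu',
                     'MeasureValueType': 'DOUBLE'},
                ],
            },
        },
    },
    ReportConfiguration={
        'ReportS3Configuration': {'BucketName': 'my-report-bucket'},
    },
)
```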

How can I send data to Timestream for LiveAnalytics using AWS IoT Core?

You can collect data from your IoT devices and store that data in Timestream for LiveAnalytics using AWS IoT Core rule actions. For more detailed information, see the documentation.

How can I send data to Timestream for LiveAnalytics using Amazon Kinesis?

You can use Apache Flink to transfer your time-series data from Amazon Kinesis directly into Timestream for LiveAnalytics. For more detailed information, see the documentation.

How can I send data to Timestream for LiveAnalytics using Amazon MSK?

You can use Apache Flink to send your time-series data from Amazon Managed Streaming for Apache Kafka (Amazon MSK) directly into Timestream for LiveAnalytics. For more detailed information, see the documentation.

Data storage

How does Timestream for LiveAnalytics store data?

Timestream organizes and stores time-series data in partitions. The partitioning of data is determined by the service based on the attributes of the data. Attributes such as timestamp and measure_name or customer-defined partition keys play a key role in deciding the partitions. See Werner Vogels’ blog for more details. If you want to optimize your query performance to better fit your specific needs, we recommend using customer-defined partition keys. Using Timestream, you can automate data lifecycle management by simply configuring data retention policies to automatically move data from the memory store to the magnetic store as it reaches the configured age.
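
For example, a sketch of creating a table with a customer-defined partition key on a device_id dimension (the names are placeholders):

```python
import boto3

write_client = boto3.client('timestream-write')

# Partition on a dimension you filter by most, instead of the default
# measure_name-based partitioning.
write_client.create_table(
    DatabaseName='my_database',  # placeholder names
    TableName='my_table',
    Schema={
        'CompositePartitionKey': [{
            'Type': 'DIMENSION',
            'Name': 'device_id',
            'EnforcementInRecord': 'REQUIRED',  # reject records missing the key
        }],
    },
)
```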

What are the benefits of Timestream for LiveAnalytics memory store?

Timestream for LiveAnalytics memory store is a write-optimized store that accepts and deduplicates the incoming time-series data. The memory store is also optimized for latency-sensitive point-in-time queries.

What are the benefits of Timestream for LiveAnalytics magnetic store?

Timestream for LiveAnalytics magnetic store is a read-optimized store built for running fast analytical queries that scan hundreds of terabytes of data.

How does data lifecycle management work?

You can set the retention period for both memory store and magnetic store. The defaults are 12 hours and 10 years respectively. As the age of the data, determined by the timestamp in the record, exceeds the configured retention period of memory store, Timestream for LiveAnalytics automatically tiers the data into the magnetic store. Similarly, if the age of the data exceeds the configured magnetic store retention period, the service automatically deletes the data.
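
A minimal sketch of setting those retention periods on an existing table with boto3 (the database and table names are placeholders):

```python
import boto3

write_client = boto3.client('timestream-write')

# Keep 12 hours in the memory store, then tier to the magnetic store,
# and delete data older than roughly 10 years.
write_client.update_table(
    DatabaseName='my_database',
    TableName='my_table',
    RetentionProperties={
        'MemoryStoreRetentionPeriodInHours': 12,
        'MagneticStoreRetentionPeriodInDays': 3650,
    },
)
```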

What is the durability of my data?

Timestream for LiveAnalytics ensures durability of your data by automatically replicating your memory and magnetic store data across different Availability Zones within a single Region. All of your data is written to disk before acknowledging your write request as complete.

Analytics and machine learning

How can I query data with Timestream for LiveAnalytics?

You can use SQL to query your time-series data stored in Timestream. You can also use built-in time-series analytics functions for interpolation, regression, and smoothing. For more information, see the documentation. The Timestream adaptive query engine lets you access data across storage tiers with a single SQL statement, transparently combining data without requiring you to specify its location.
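
For example, a sketch that bins CPU utilization into 1-minute averages over the last hour using the boto3 timestream-query client (the database, table, and measure names are placeholders):

```python
import boto3

query_client = boto3.client('timestream-query')

# Average CPU per host over the last hour, in 1-minute bins.
query = """
SELECT host, BIN(time, 1m) AS binned_time,
       AVG(measure_value::double) AS avg_cpu
FROM "my_database"."my_table"
WHERE measure_name = 'cpu_utilization' AND time > ago(1h)
GROUP BY host, BIN(time, 1m)
ORDER BY binned_time
"""

paginator = query_client.get_paginator('query')
for page in paginator.paginate(QueryString=query):
    for row in page['Rows']:
        print(row['Data'])
```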

Can I automatically roll up, aggregate, or preprocess my data with Timestream for LiveAnalytics?

Timestream for LiveAnalytics scheduled queries offer a fully managed, serverless, and scalable solution for calculating and storing aggregates, rollups, and other real-time analytics used to power frequently accessed operational dashboards, business reports, applications, and device monitoring systems.

With scheduled queries, you simply define the real-time analytics queries that calculate aggregates, rollups, and other real-time analytics on your incoming data, and Timestream periodically and automatically runs these queries and reliably writes the query results into a separate table. You can then point your dashboards, reports, applications, and monitoring systems to simply query the destination tables instead of querying the considerably larger source tables containing the incoming time-series data. This leads to performance and cost reductions by an order of magnitude because the destination tables contain much less data than the source tables, thereby offering faster and cheaper data access and storage.
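
A hedged sketch of a scheduled query that rolls incoming CPU data up into hourly averages; the ARNs, names, and mappings are placeholders, and the execution role, SNS topic, and S3 error bucket are assumed to already exist:

```python
import boto3

query_client = boto3.client('timestream-query')

query_client.create_scheduled_query(
    Name='hourly-cpu-rollup',
    QueryString="""
        SELECT host, BIN(time, 1h) AS hour,
               AVG(measure_value::double) AS avg_cpu
        FROM "my_database"."my_table"
        WHERE measure_name = 'cpu_utilization'
          AND time BETWEEN @scheduled_runtime - 1h AND @scheduled_runtime
        GROUP BY host, BIN(time, 1h)
    """,
    ScheduleConfiguration={'ScheduleExpression': 'rate(1 hour)'},
    NotificationConfiguration={'SnsConfiguration': {
        'TopicArn': 'arn:aws:sns:us-east-1:123456789012:sq-notifications'}},
    TargetConfiguration={'TimestreamConfiguration': {
        'DatabaseName': 'my_database',
        'TableName': 'hourly_rollups',  # the smaller destination table
        'TimeColumn': 'hour',
        'DimensionMappings': [{'Name': 'host', 'DimensionValueType': 'VARCHAR'}],
        'MultiMeasureMappings': {
            'TargetMultiMeasureName': 'cpu_stats',
            'MultiMeasureAttributeMappings': [
                {'SourceColumn': 'avg_cpu', 'MeasureValueType': 'DOUBLE'}],
        },
    }},
    ScheduledQueryExecutionRoleArn='arn:aws:iam::123456789012:role/sq-role',
    ErrorReportConfiguration={'S3Configuration': {'BucketName': 'my-error-bucket'}},
)
```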

Can I use JDBC and ODBC drivers with Timestream for LiveAnalytics?

You can use JDBC and ODBC drivers to connect Timestream for LiveAnalytics to your preferred business intelligence tools and other applications. See the JDBC and ODBC documentation for additional details.

What visualization, analytics, and ML tools can I use with Timestream for LiveAnalytics?

You can visualize and analyze time-series data in Timestream for LiveAnalytics using Amazon QuickSight and Grafana. You can also use QuickSight for your ML needs.

How do I use Timestream for LiveAnalytics with QuickSight?

You can create rich and interactive dashboards for your time-series data using QuickSight. For more information, see the documentation.

How do I use Timestream for LiveAnalytics with Amazon SageMaker?

You can use Amazon SageMaker notebooks to integrate your ML models with Timestream for LiveAnalytics. For more information, see the documentation.

Security and compliance

Does Timestream for LiveAnalytics support data encryption?

Data is always encrypted whether at rest or in transit. Timestream for LiveAnalytics allows you to specify an AWS Key Management Service (AWS KMS) customer managed key for encrypting data in the magnetic store.

What compliance certification readiness does Timestream for LiveAnalytics meet?

Timestream for LiveAnalytics is ISO (9001, 27001, 27017, and 27018), PCI DSS, FedRAMP (Moderate), and Health Information Trust (HITRUST) Alliance Common Security Framework (CSF) compliant. It is also HIPAA eligible and in scope for AWS SOC 1, SOC 2, and SOC 3 reports.

Data protection

What backup options are available for Timestream for LiveAnalytics?

You have two backup options available for your Timestream resources: on-demand backups and scheduled backups. On-demand backups are one-time backups that can be initiated either from the Timestream console or AWS Backup. On-demand backups are useful when you want to create a backup prior to making a change to your table that might require you to revert the changes. Scheduled backups are recurring backups that you can configure, using AWS Backup policies, at desired frequencies (such as 12 hours, 1 day, 1 week, and so on). Scheduled backups are useful when you want to create ongoing backups to meet your data protection goals.

Are Timestream for LiveAnalytics table backups full or incremental backups?

The first backup, either on demand or scheduled, of the table is a full backup and every subsequent backup of the same table is incremental, copying only the data that has changed since the last backup. 

How will I be charged and billed for my use of backup and restore capabilities?

Backups and restores are charged based on the backup storage size of the selected table, measured on a GB-month basis. Charges appear under Backup in your AWS bill and include costs for backup storage, data transfers, restores, and early deletes. Because backups are incremental, the billed storage size of each subsequent backup of the table reflects only the data that changed since the last backup. Refer to AWS Backup pricing for additional details.

How do I get started protecting my Timestream for LiveAnalytics data?

To get started, you need to enable AWS Backup to protect your Timestream for LiveAnalytics resources (this is a one-time action). Once enabled, navigate to the AWS Management Console or use the AWS Backup CLI or SDK to create on-demand or scheduled backups of your data and copy those backups across accounts and Regions. You can configure your backup lifecycle management based on your data protection needs. For more information, refer to the creating a backup documentation.
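
For example, a sketch of starting an on-demand backup through the boto3 AWS Backup client; the vault, table ARN, and IAM role are placeholders:

```python
import boto3

backup = boto3.client('backup')

# One-time, on-demand backup of a LiveAnalytics table. The role must grant
# AWS Backup permission to back up Timestream resources.
backup.start_backup_job(
    BackupVaultName='Default',
    ResourceArn=('arn:aws:timestream:us-east-1:123456789012:'
                 'database/my_database/table/my_table'),
    IamRoleArn=('arn:aws:iam::123456789012:role/service-role/'
                'AWSBackupDefaultServiceRole'),
)
```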

How can I restore my Timestream for LiveAnalytics data?

You can restore your Timestream for LiveAnalytics tables through the AWS Management Console or using the AWS Backup CLI or SDK. Select the recovery point ID for the resource you want to restore, and provide the required inputs such as destination database name, new table name, and retention properties to start the restore process. Upon successful restore, you can access the data. When you attempt to restore the latest incremental backup of your table, the entire table data is restored. For more information, refer to the documentation.

Timestream for InfluxDB

Introduction

What is Amazon Timestream for InfluxDB?

Amazon Timestream for InfluxDB is a new time-series database engine that makes it easy for application developers and DevOps teams to run InfluxDB databases on AWS for real-time time-series applications using open source APIs. With Timestream for InfluxDB, it is easy to set up, operate, and scale time-series workloads that can answer queries with single-digit millisecond query responses.

What versions of InfluxDB are supported?

Timestream for InfluxDB supports open source InfluxDB version 2.7.

Why should I use Timestream for InfluxDB?

You should use Timestream for InfluxDB if you are self-managing InfluxDB, want to use open source time-series APIs, or are building real-time time-series applications that require single-digit millisecond query responses. With Timestream for InfluxDB, you get the benefit of open source APIs and a wide set of open source Telegraf agents to collect time-series data. You do not need to manage complex and time-consuming tasks, such as InfluxDB installation, upgrades, storage, replication for high availability, and backups.

What kind of SLAs can I expect from Timestream for InfluxDB?

Timestream for InfluxDB provides an SLA of 99.9% availability when deployed with a Multi-AZ configuration and 99.5% availability for a Single-AZ configuration.

What performance can I expect from Timestream for InfluxDB?

Timestream for InfluxDB is built for near real-time time-series use cases. Depending on instance configuration and workload characteristics, you can expect write-to-read latency of approximately 1 second and query latency of single-digit milliseconds.

How do I migrate workloads to Timestream for InfluxDB?

To migrate to Timestream for InfluxDB from a self-managed InfluxDB instance, you can simply restore a backup from an existing InfluxDB database into a Timestream for InfluxDB instance with a few minutes of downtime. You can reconfigure your data collection agents, such as open source Telegraf agents, to target the InfluxDB endpoint managed by Timestream for InfluxDB. Dashboarding technologies, such as InfluxDB UI, self-hosted Grafana, or Amazon Managed Grafana, will continue to work by configuring them to use the Timestream for InfluxDB endpoint without any other code changes.

To migrate from Timestream for LiveAnalytics to Timestream for InfluxDB, you can export your data from Timestream for LiveAnalytics to Amazon S3, make any required modifications to the CSV files exported, and load it into Timestream for InfluxDB.

Database instances

What is a database (DB) instance?

You can think of a DB instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB instances, define and refine infrastructure attributes of your DB instances, and control access and security through the AWS Management Console, Timestream for InfluxDB APIs, and AWS CLI. You can run one or more DB instances, and each DB instance can support one or more databases (buckets) or organizations, depending on the workload characteristics and instance configuration.

How do I create a DB instance?

DB instances are simple to create using either the AWS Management Console, Amazon Timestream for InfluxDB APIs, or AWS CLI. To launch a DB instance using the AWS Management Console, choose InfluxDB Databases and then select the Create InfluxDB Database button on the dashboard. From there, you can specify the parameters for your DB instance, including instance type, storage type and amount, primary user credentials, and more.

Alternatively, you can create your DB instance using the CreateDBInstance API or create-db-instance command.
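
Here is a sketch of that API call with boto3; the network resources, credentials, and storage-type value are placeholders or assumptions to confirm against the API reference:

```python
import boto3

influx = boto3.client('timestream-influxdb')

influx.create_db_instance(
    Name='my-influxdb',
    Username='admin',
    Password='REPLACE_ME',               # store real credentials securely
    Organization='my-org',
    Bucket='my-bucket',
    DbInstanceType='db.influx.large',
    DbStorageType='InfluxIOIncludedT1',  # assumed value for the 3K IOPS tier
    AllocatedStorage=100,                # GB
    VpcSubnetIds=['subnet-0123456789abcdef0'],
    VpcSecurityGroupIds=['sg-0123456789abcdef0'],
)
```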

How do I access my running DB instance?

Once your DB instance is available, you can retrieve its endpoint through the DB instance description in the AWS Management Console, GetDBInstance API, or get-db-instance command. Using this endpoint plus your access token, you can send write and read requests through the InfluxDB APIs and manage the engine using your favorite database tool or programming language. You can also access the InfluxDB UI from your browser using that same endpoint. To allow network requests to your running DB instance, you will need to authorize access or enable public IP access.
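
For instance, using the open source influxdb-client Python package against your instance’s endpoint (the URL, token, org, and bucket are placeholders):

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(
    url='https://<your-instance-endpoint>:8086',
    token='<your-access-token>',
    org='my-org',
)

# Write a point, then read the last hour back with a Flux query.
write_api = client.write_api(write_options=SYNCHRONOUS)
write_api.write(
    bucket='my-bucket',
    record=Point('cpu').tag('host', 'host-1').field('usage', 57.3),
)

query_api = client.query_api()
tables = query_api.query('from(bucket: "my-bucket") |> range(start: -1h)')
```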

How many DB instances can I run with Timestream for InfluxDB?

By default, you are allowed to have up to a total of 40 Timestream for InfluxDB instances.

Billing

How will I be charged and billed for my use of Timestream for InfluxDB?

You pay only for what you use and there are no minimum or setup fees. You are billed based on the following:

  • DB instance hours: Based on the class (for example, db.influx.large and db.influx.4xlarge) of the DB instance consumed. Partial DB instance hours consumed are billed in 1-second increments with a 10-minute minimum charge following a billable status change, such as creating, starting, or modifying the DB instance class.
  • Storage (per GB-Month): Storage capacity you have provisioned to your DB instance. If you scale your provisioned storage capacity within the month, your bill will be prorated.
  • Data transfer: Internet data transfer in and out of your DB instance.

For Timestream for InfluxDB pricing information, visit the pricing page.

When does billing of my Timestream for InfluxDB DB instances begin and end?

Billing commences for a DB instance as soon as the DB instance is available. Billing continues until the DB instance stops, which would occur upon deletion or in the event of an instance failure.

What defines billable Timestream for InfluxDB instance hours?

DB instance hours are billed for each hour your DB instance is running in an available state. If you no longer wish to be charged for your DB instance, you must stop or delete it to avoid being billed for additional instance hours. Partial DB instance hours consumed are billed in 1-second increments with a 10-minute minimum charge following a billable status change, such as creating, starting, or modifying the DB instance class.

How will I be billed for a stopped DB instance?

While your database instance is stopped, you are charged for provisioned storage but not for DB instance hours.

How will I be billed for Multi-AZ DB instance deployments?

If you specify that your DB instance should be a Multi-AZ deployment, you will be billed according to the Multi-AZ pricing posted on the Timestream for InfluxDB pricing page. Multi-AZ billing is based on the following:

  • Multi-AZ DB instance hours: Based on the class (for example, db.influx.large and db.influx.4xlarge) of the DB instance consumed. As with standard deployments in a single Availability Zone, partial DB instance hours consumed are billed in 1-second increments with a 10-minute minimum charge following a billable status change, such as creating, starting, or modifying the DB instance class. If you convert your DB instance deployment between standard and Multi-AZ within a given hour, you will be charged both applicable rates for that hour.
  • Provisioned storage (for Multi-AZ DB instance): If you convert your deployment between standard and Multi-AZ within a given hour, you will be charged the higher of the applicable storage rates for that hour.
  • Data transfer: You are not charged for the data transfer incurred in replicating data between your primary and standby. Internet data transfer in and out of your DB instance is charged the same as with a standard deployment.

Do your prices include taxes?

Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and sales tax. For customers with a Japanese billing address, the use of AWS services is subject to Japanese Consumption Tax. 

Hardware

How do I determine which initial DB instance class and storage capacity are appropriate for my needs?

In order to select your initial DB instance class and storage capacity, you will want to assess your application’s compute, memory, and storage needs. For information about the DB instance classes available, refer to the Timestream for InfluxDB User Guide.

What is Timestream for InfluxDB IOPS Included Storage?

Timestream for InfluxDB IOPS Included Storage is an SSD-backed storage option designed to deliver fast, predictable, and consistent I/O performance. With Timestream for InfluxDB IOPS Included Storage, you have three tiers to choose from, ranging from small workloads to large-scale, high-performance optimized ones. You only specify the volume size allocated for the tier that best fits your needs. Timestream for InfluxDB IOPS Included Storage is optimized for I/O-intensive, transactional (OLTP) database workloads. For more details, see the Timestream for InfluxDB User Guide.

How do I choose among the Timestream for InfluxDB storage types?

Choose the storage type that’s most suited for your workload.

Maximum amount of series | Writes (data points per second) | Queries per second | Instance type | Storage tier
<100K | ~50K | ~5 | db.influx.large | InfluxDB I/O included 3K
<1MM | ~150K | <25 | db.influx.2xlarge | InfluxDB I/O included 3K
~1MM | ~200K | ~25 | db.influx.4xlarge | InfluxDB I/O included 12K
<10MM | >250K | ~35 | db.influx.4xlarge | InfluxDB I/O included 12K
~10MM | ~500K | ~50 | db.influx.8xlarge | InfluxDB I/O included 12K
~10MM | <750K | <100 | db.influx.12xlarge | InfluxDB I/O included 12K

Database configuration

How do I choose the right configuration parameters for my DB instances?

By default, Timestream for InfluxDB chooses the optimal configuration parameters for your DB instance, taking into account the instance class and storage capacity. However, if you want to change them, you can do so using the AWS Management Console, Timestream for InfluxDB APIs, or AWS CLI. Note that changing configuration parameters from recommended values can have unintended effects, ranging from degraded performance to system crashes, and should only be attempted by advanced users who wish to assume these risks.

At launch, we provide a limited set of parameters that you can modify: flux-log-enabled, log-level, metrics-disabled, no-tasks, query-concurrency, query-queue-size, and tracing-type. This list might grow over time based on customer requirements.

What are DB parameter groups? How are they helpful?

A DB parameter group acts as a container for engine configuration values that can be applied to one or more DB instances. If you create a DB instance without specifying a DB parameter group, a default DB parameter group is used. This default group contains engine defaults and Timestream for InfluxDB system defaults optimized for the DB instance that you’re running.

However, if you want your DB instance to run with your custom-specified engine configuration values, you can simply create a new DB parameter group, modify the desired parameters, and modify the DB instance to use the new DB parameter group. 
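
A sketch only: the nested parameter-map shape below (an 'InfluxDBv2' key with camelCase parameter names) is an assumption about the current boto3 API and should be verified against the SDK documentation:

```python
import boto3

influx = boto3.client('timestream-influxdb')

# Create a custom parameter group, then attach it to an instance by passing
# its identifier when creating or modifying the DB instance.
influx.create_db_parameter_group(
    Name='my-custom-params',
    Description='Raised query concurrency for dashboard workloads',
    Parameters={'InfluxDBv2': {
        'queryConcurrency': 20,   # assumed key names
        'queryQueueSize': 40,
        'logLevel': 'info',
    }},
)
```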

Multi-AZ deployments

What does it mean to run a DB instance as a Multi-AZ deployment?

When you create or modify your DB instance to run as a Multi-AZ deployment, Timestream for InfluxDB automatically provisions and maintains a synchronous standby replica in a different Availability Zone. Updates to your DB instance are synchronously replicated across Availability Zones to the standby in order to keep both in sync and protect your latest database updates against DB instance failure. 

During certain types of planned maintenance, or in the unlikely event of DB instance failure or Availability Zone failure, Timestream for InfluxDB will automatically failover to the standby so that you can resume database writes and reads as soon as the standby is promoted. Since the name record for your DB instance remains the same, your application can resume database operation without the need for manual administrative intervention.

With Multi-AZ deployments, replication is transparent. You do not interact directly with the standby, and it cannot be used to serve read traffic.

What is an Availability Zone?

Availability Zones are distinct locations within a Region that are engineered to be isolated from failures in other Availability Zones. Each Availability Zone runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. Common points of failures such as generators and cooling equipment are not shared across Availability Zones. Additionally, they are physically separate, such that even extremely uncommon disasters such as fires, tornados, or flooding would only affect a single Availability Zone. Availability Zones within the same Region benefit from low-latency network connectivity.

What do “primary” and “standby” mean in the context of a Multi-AZ deployment?

When you run a DB instance as a Multi-AZ deployment, the primary serves database writes and reads. In addition, Timestream for InfluxDB provisions and maintains a standby behind the scenes, which is an up-to-date replica of the primary. The standby is promoted in failover scenarios. After failover, the standby becomes the primary and accepts your database operations. You do not interact directly with the standby (for example, for read operations) at any point prior to promotion.

What are the benefits of a Multi-AZ deployment?

The chief benefits of running your DB instance as a Multi-AZ deployment are enhanced database durability and availability. The increased availability and fault tolerance offered by Multi-AZ deployments make them a natural fit for production environments.

Running your DB instance as a Multi-AZ deployment safeguards your data in the unlikely event of a DB instance component failure or loss of availability in one Availability Zone.

For example, if a storage volume on your primary fails, Timestream for InfluxDB automatically initiates a failover to the standby, where all of your database updates are intact. This provides additional data durability relative to standard deployments in a single Availability Zone where a user-initiated restore operation would be required and updates that occurred after the latest restorable time (typically within the last 5 minutes) would not be available.

You also benefit from enhanced database availability when running your DB instance as a Multi-AZ deployment. If an Availability Zone failure or DB instance failure occurs, your availability impact is limited to the time automatic failover takes to complete. The availability benefits of Multi-AZ also extend to planned maintenance. Another implied benefit of running your DB instance as a Multi-AZ deployment is that DB instance failover is automatic and requires no administration. 

Are there any performance implications of running my DB instance as a Multi-AZ deployment?

You might observe elevated latencies relative to a standard DB instance deployment in a Single Availability Zone as a result of the synchronous data replication performed on your behalf.

When running my DB instance as a Multi-AZ deployment, can I use the standby for read or write operations?

No, a Multi-AZ standby cannot serve read requests. Multi-AZ deployments are designed to provide enhanced database availability and durability, rather than read scaling benefits. As such, the feature uses synchronous replication between primary and standby. Our implementation makes sure the primary and the standby are constantly in sync, but precludes using the standby for read or write operations.

How do I set up a Multi-AZ DB instance deployment?

In order to create a Multi-AZ DB instance deployment, simply choose the Create a Standby Instance option for Multi-AZ Deployment when launching a DB instance with the AWS Management Console. Alternatively, if you are using the Timestream for InfluxDB APIs, you would call the CreateDBInstance API and set the Multi-AZ parameter to the True value.

At this time, you cannot convert an existing Single-AZ Timestream for InfluxDB DB instance to Multi-AZ. The only way to achieve this is to create a new Multi-AZ DB instance and migrate your workload to it.
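
As a sketch, in the current boto3 client the Multi-AZ choice is expressed through a deployment-type parameter on the create call; the exact value below is an assumption to confirm against the API reference, and the other arguments are placeholders:

```python
import boto3

influx = boto3.client('timestream-influxdb')

influx.create_db_instance(
    Name='my-influxdb-ha',
    Username='admin',
    Password='REPLACE_ME',
    Organization='my-org',
    Bucket='my-bucket',
    DbInstanceType='db.influx.large',
    DbStorageType='InfluxIOIncludedT1',
    AllocatedStorage=100,
    VpcSubnetIds=['subnet-0123456789abcdef0', 'subnet-0fedcba9876543210'],
    VpcSecurityGroupIds=['sg-0123456789abcdef0'],
    DeploymentType='WITH_MULTIAZ_STANDBY',  # assumed Multi-AZ value
)
```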

What events would cause Timestream for InfluxDB to initiate a failover to the standby replica?

Timestream for InfluxDB detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Timestream for InfluxDB automatically performs a failover in the event of any of the following:

  • Loss of availability in primary Availability Zone
  • Loss of network connectivity to primary
  • Compute unit failure on primary
  • Storage failure on primary

Note: Timestream for InfluxDB Multi-AZ deployments do not fail over automatically in response to database operations, such as long running queries, deadlocks, or database corruption errors.

What happens during Multi-AZ failover and how long does it take?

Failover is automatically handled by Timestream for InfluxDB so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Timestream for InfluxDB simply flips the canonical name (CNAME) record for your DB instance to point at the standby, which is in turn promoted to become the new primary. We encourage you to follow best practices and implement database connection retry at the application layer.

Failovers, as defined by the interval between the detection of the failure on the primary and the resumption of transactions on the standby, typically complete within a couple of minutes. Failover time can also be affected by whether large uncommitted transactions must be recovered, the size of the index, and other factors; the use of adequately large instance types is recommended with Multi-AZ for best results. AWS also recommends the use of Timestream for InfluxDB IOPS Included Storage with Multi-AZ instances for fast, predictable, and consistent throughput performance.

Can I initiate a forced failover for my Multi-AZ DB instance deployment?

Timestream for InfluxDB will automatically failover without user intervention under a variety of failure conditions. At this time, you cannot manually initiate a forced failover of your Timestream for InfluxDB DB instance.

How do I control and configure Multi-AZ synchronous replication?

With Multi-AZ deployments, you simply set the Multi-AZ parameter to True. The creation of the standby, synchronous replication, and failover are all handled automatically. This means you cannot select the Availability Zone your standby is deployed in or alter the number of standbys available (Timestream for InfluxDB provisions one dedicated standby per DB instance primary). The standby also cannot be configured to accept database read activity.

Will my standby be in the same Region as my primary?

Yes, your standby is automatically provisioned in a different Availability Zone of the same Region as your DB instance primary.

Can I see which Availability Zone my primary is currently located in?

Yes, you can gain visibility into the location of the current primary by using the AWS Management Console or GetDBInstance API.

After failover, my primary is now located in a different Availability Zone than my other AWS resources (such as EC2 instances). Should I be concerned about latency?

Availability Zones are engineered to provide low-latency network connectivity to other Availability Zones in the same Region. In addition, you might want to consider architecting your application and other AWS resources with redundancy across multiple Availability Zones so your application will be resilient in the event of an Availability Zone failure. Multi-AZ deployments address this need for the database tier without administration on your part.
