Amazon Timestream features


Amazon Timestream databases make it easy to work with time-series data in the AWS Cloud. Timestream is fully managed, so it frees up your time by removing time-consuming database infrastructure tasks such as installation, upgrades, storage, replication for high availability, and manual backups. Timestream offers a choice of two databases: Amazon Timestream for LiveAnalytics and Amazon Timestream for InfluxDB.

With Timestream for LiveAnalytics, you can ingest tens of gigabytes of time-series data per minute and run SQL queries on terabytes of time-series data in seconds. It has built-in time-series analytics functions, helping you identify trends and patterns in near real time. Timestream for LiveAnalytics defines time series as a native data type and supports advanced aggregates, window functions, and complex data types such as arrays and rows. Timestream for LiveAnalytics provides up to 99.99% availability. Ideal use cases for Timestream for LiveAnalytics include security analytics and quality monitoring of video streaming.

With Timestream for InfluxDB, you can easily run open source InfluxDB databases on AWS for time-series applications with millisecond response times. Timestream for InfluxDB provides up to 99.9% availability, and you can choose a Multi-AZ deployment option that automatically detects failures and fails over to a different Availability Zone (AZ). Ideal use cases for Timestream for InfluxDB include real-time alerting and monitoring infrastructure reliability.


Performance and scalability

Timestream for LiveAnalytics is serverless, meaning it automatically scales up or down to adjust capacity and performance so you don’t need to manage the underlying infrastructure or provision capacity. It can process millions of queries because of its fully decoupled architecture, in which data ingestion, storage, and query scale independently, offering virtually infinite scale for an application’s needs.

Timestream for LiveAnalytics simplifies your data lifecycle management with a memory store for recent data and a magnetic store for historical data. The memory store is optimized for fast point-in-time queries, and the magnetic store is optimized for fast analytic queries. With Timestream for LiveAnalytics, you don’t need to configure, monitor, and manage a complex data archival process. You can simply configure data retention policies to automatically move data from the memory store to the magnetic store and to delete it from the magnetic store when it reaches a certain age.
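As a sketch of how a retention policy is configured, the following shows table creation with the AWS SDK for Python (boto3). The database name, table name, and retention values are illustrative placeholders, not values from this page.

```python
# Retention policy sketch for Timestream for LiveAnalytics: keep 24 hours
# in the memory store (fast point-in-time queries), then 365 days in the
# magnetic store (fast analytic queries) before automatic deletion.
retention_properties = {
    "MemoryStoreRetentionPeriodInHours": 24,
    "MagneticStoreRetentionPeriodInDays": 365,
}

def create_metrics_table(write_client, database="iot_db", table="sensor_metrics"):
    """Create a table with the retention policy above. Pass a
    boto3.client('timestream-write') instance as write_client."""
    return write_client.create_table(
        DatabaseName=database,
        TableName=table,
        RetentionProperties=retention_properties,
    )
```

Once the table exists, Timestream moves and expires data according to these two numbers; there is no separate archival job to run.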

Timestream for InfluxDB is a fully managed service, making it easy to run InfluxDB databases on AWS for real-time time-series applications using open source APIs. It provides single-digit millisecond response times for real-time monitoring and alerting use cases, as well as the ability to run complex analytics over petabytes of data in seconds. Timestream for InfluxDB has a high-throughput data store and query engine to meet these needs. It also allows you to optimize your performance and cost by automating all your data cleaning and aggregation tasks with specialized built-in tools and features.


Security

All data in Timestream is automatically encrypted by default, so you don’t need to manually encrypt data at rest or in transit.

Timestream for LiveAnalytics offers native integrations with AWS Identity and Access Management (IAM) and AWS Key Management Service (AWS KMS), so you can securely manage access to your resources and data, including specifying an AWS KMS customer managed key for encrypting data in the magnetic store. Timestream for LiveAnalytics also lets you protect your time-series data through integration with AWS Backup, helping you meet your compliance and business continuity needs.

Using this fully managed functionality, you can create immutable backups, automate backup lifecycle management, and copy those backups across AWS accounts and Regions. In addition, you can schedule periodic backups of your data to meet your regulatory needs. The first backup of your table is a full backup, and subsequent backups of the same table are incremental, only copying the changes since the last backup, making it flexible and cost-effective to protect your data. 

You can create different backup plans for the Timestream for LiveAnalytics tables in your account, allowing you to protect each resource based on your specific regulatory and business continuity needs. You can also set retention policies that will automatically retain, expire, and transition backups to cold storage, minimizing backup storage costs. Additionally, you can restore the entire table to a database in a few steps, simplifying data recovery. 

Timestream for InfluxDB offers integration with AWS Secrets Manager, so you can rotate, manage, and retrieve database credentials, API keys, and other secrets through their lifecycle.
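As a minimal sketch of the Secrets Manager pattern, the helper below parses the JSON `SecretString` that `GetSecretValue` returns. The secret ID and the `username`/`password` key names are assumptions about how the credentials were stored.

```python
import json

def parse_db_secret(secret_string):
    """Extract database credentials from the JSON SecretString returned
    by Secrets Manager's GetSecretValue. The key names here are an
    assumption about how the secret was written."""
    secret = json.loads(secret_string)
    return secret["username"], secret["password"]

# In practice the string comes from the API, e.g.:
# resp = boto3.client("secretsmanager").get_secret_value(SecretId="influxdb/admin")
# user, password = parse_db_secret(resp["SecretString"])
user, password = parse_db_secret('{"username": "admin", "password": "s3cret"}')
```

Because Secrets Manager handles rotation, applications that fetch credentials this way pick up new passwords without a redeploy.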

Integrations with AWS services

Timestream for LiveAnalytics integrates with commonly used services for importing and exporting data, boosting your application with machine learning (ML), or visualizing your data. You can send data to Timestream using AWS IoT Core, Amazon Kinesis, Amazon MSK, and open source Telegraf connectors. You can use Amazon SageMaker with Timestream for ML. You can also visualize data using Amazon QuickSight, Grafana, and business intelligence tools through JDBC.
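When you write data directly from an application rather than through one of those connectors, records go in through the WriteRecords API. The sketch below builds one record in that shape; the dimension and measure names are illustrative placeholders.

```python
import time

def make_record(device_id, measure_name, value, time_ms=None):
    """Build one record in the shape the Timestream WriteRecords API
    expects: dimensions, a measure, and a timestamp."""
    time_ms = time_ms if time_ms is not None else int(time.time() * 1000)
    return {
        "Dimensions": [{"Name": "device_id", "Value": device_id}],
        "MeasureName": measure_name,
        "MeasureValue": str(value),
        "MeasureValueType": "DOUBLE",
        "Time": str(time_ms),
        "TimeUnit": "MILLISECONDS",
    }

record = make_record("sensor-42", "temperature", 21.5, time_ms=1700000000000)
# A write call would then look like:
# boto3.client("timestream-write").write_records(
#     DatabaseName="iot_db", TableName="sensor_metrics", Records=[record])
```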


With Timestream for LiveAnalytics, you can store and analyze trillions of events per day up to 1,000 times faster and at as little as one-tenth the cost of relational databases. Its adaptive query engine lets you access and combine recent and historical data across storage tiers with a single SQL statement, transparently resolving where the data lives without requiring you to specify its location.
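A query of this kind might look like the following, using Timestream's built-in `BIN` and `ago` time-series functions. All database, table, and measure names are placeholders.

```python
# One SQL statement spans both the memory and magnetic stores; the engine
# resolves data location transparently. Names below are placeholders.
query = """
SELECT device_id,
       BIN(time, 5m) AS binned_time,
       AVG(measure_value::double) AS avg_temperature
FROM "iot_db"."sensor_metrics"
WHERE measure_name = 'temperature'
  AND time BETWEEN ago(30d) AND now()
GROUP BY device_id, BIN(time, 5m)
ORDER BY binned_time
"""
# Run it with the query SDK:
# rows = boto3.client("timestream-query").query(QueryString=query)
```

The 30-day window can reach well past the memory store's retention into the magnetic store; nothing in the statement changes when it does.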


Timestream for LiveAnalytics scheduled queries offer a fully managed, serverless, and scalable solution for calculating and storing aggregates, rollups, and other real-time analytics used to power frequently accessed operational dashboards, business reports, applications, and device monitoring systems. With scheduled queries, you simply define the queries that calculate aggregates, rollups, and other real-time analytics on your incoming data.

Timestream for LiveAnalytics periodically and automatically runs these queries and reliably writes the results into a configurable destination table. You can then point your dashboards, reports, applications, and monitoring systems to simply query the destination tables instead of querying the considerably larger source tables containing the incoming time-series data. This leads to increased performance while reducing cost by an order of magnitude. 

Because the destination tables contain much less data than the source tables, they offer faster and less expensive data access and storage, and you can retain data in them for a much longer duration at a fraction of the storage cost of the source table. You can also choose to reduce the data retention period of your source tables to lower costs further. Scheduled queries can, therefore, make time-series analytics faster, more cost-effective, and more accessible to many more customers, so you can continue to make better data-driven business decisions.
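As an abbreviated sketch, the arguments to the CreateScheduledQuery API for an hourly rollup might look like the following. Every name and ARN is a placeholder, and a real call also needs measure mappings in the target configuration; `@scheduled_runtime` is Timestream's bind parameter for the scheduled run time.

```python
# Hourly rollup from a large source table into a smaller destination table.
# All identifiers and ARNs are placeholders; the target configuration is
# abbreviated (measure mappings omitted).
scheduled_query_args = {
    "Name": "hourly_temperature_rollup",
    "QueryString": (
        'SELECT device_id, BIN(time, 1h) AS time, '
        "AVG(measure_value::double) AS avg_temperature "
        'FROM "iot_db"."sensor_metrics" '
        "WHERE measure_name = 'temperature' "
        "AND time BETWEEN @scheduled_runtime - 1h AND @scheduled_runtime "
        "GROUP BY device_id, BIN(time, 1h)"
    ),
    "ScheduleConfiguration": {"ScheduleExpression": "rate(1 hour)"},
    "TargetConfiguration": {
        "TimestreamConfiguration": {
            "DatabaseName": "iot_db",
            "TableName": "sensor_metrics_hourly",  # smaller destination table
            "TimeColumn": "time",
            "DimensionMappings": [
                {"Name": "device_id", "DimensionValueType": "VARCHAR"}
            ],
        }
    },
    "NotificationConfiguration": {
        "SnsConfiguration": {"TopicArn": "arn:aws:sns:us-east-1:123456789012:sq-topic"}
    },
    "ErrorReportConfiguration": {
        "S3Configuration": {"BucketName": "my-error-report-bucket"}
    },
    "ScheduledQueryExecutionRoleArn": "arn:aws:iam::123456789012:role/sq-role",
}
# boto3.client("timestream-query").create_scheduled_query(**scheduled_query_args)
```

Dashboards then read from `sensor_metrics_hourly` instead of scanning the raw source table on every refresh.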

Timestream for LiveAnalytics provides a one-month complimentary trial with up to 50 GB of ingestion, 100 GB of magnetic tier storage, 750 GB of memory tier storage, and 750 GB of data scanned. 

Developer productivity

You can access Timestream using the AWS SDKs. Supported SDKs include Java, Java v2, Go, Python, Node.js, and .NET.
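The Query API returns column metadata (`ColumnInfo`) separately from the row data, so a small helper is often used to flatten results. This is a sketch with a hand-built sample response; the column and value names are placeholders.

```python
def rows_to_dicts(column_info, rows):
    """Flatten a Timestream Query API response (ColumnInfo plus Rows of
    ScalarValue datums) into plain dicts keyed by column name."""
    names = [col["Name"] for col in column_info]
    return [
        {name: datum.get("ScalarValue") for name, datum in zip(names, row["Data"])}
        for row in rows
    ]

# With a real client: resp = boto3.client("timestream-query").query(QueryString=...)
# sample = rows_to_dicts(resp["ColumnInfo"], resp["Rows"])
sample = rows_to_dicts(
    [{"Name": "device_id"}, {"Name": "avg_temperature"}],
    [{"Data": [{"ScalarValue": "sensor-42"}, {"ScalarValue": "21.5"}]}],
)
```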

Open source APIs
Timestream for InfluxDB is fully compatible with the InfluxDB open source APIs, so you can easily integrate with the Telegraf open source, plugin-driven server agent and its hundreds of specialized plugins for collecting, processing, and reporting metrics. InfluxDB has one of the strongest community support systems for time series, offering a wealth of resources, shared knowledge, and regular updates that ensure continuous improvement and reliability for its users.
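Compatibility with the open source APIs means the wire format is InfluxDB line protocol, the same format Telegraf and the InfluxDB client libraries emit. A minimal sketch of formatting one point (simplified: no escaping, and float fields only; measurement, tag, and field names are placeholders):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one point in InfluxDB line protocol:
    measurement,tag=value field=value timestamp_ns
    Simplified sketch: no character escaping, float fields only."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "cpu", {"host": "server01"}, {"usage_user": 42.5}, 1700000000000000000
)
# "cpu,host=server01 usage_user=42.5 1700000000000000000"
```

Anything that already speaks this format can point its write endpoint at a Timestream for InfluxDB instance without code changes.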