With Amazon Managed Streaming for Apache Kafka (MSK), you pay only for what you use. There are no minimum fees or upfront commitments. You do not pay for Apache ZooKeeper nodes that Amazon MSK provisions for you, or for data transfer that occurs between brokers or between Apache ZooKeeper nodes and brokers within your clusters. Amazon MSK’s pricing is based on the type of resource you create. There are two types of clusters: MSK clusters and MSK Serverless clusters. With MSK clusters, you specify and then scale cluster capacity to meet your needs. With MSK Serverless clusters, you don't need to specify or scale cluster capacity. You can also create Kafka Connect connectors using MSK Connect. See the tabs below for detailed pricing and examples.
You can also enable private connectivity (powered by AWS PrivateLink) if you need to connect your Kafka clients in one or more VPCs to an MSK cluster in a different VPC. With this feature, you pay an hourly rate for each cluster and authentication scheme that has private connectivity turned on. An authentication scheme is what clients use to authenticate their requests to the MSK cluster. Additionally, you pay per GB of data processed through private connectivity. You will also pay standard AWS PrivateLink charges for the Amazon MSK managed VPC connections used by your Apache Kafka clients to connect privately to the cluster.
You pay an hourly rate for Apache Kafka broker instance usage (billed at one-second resolution), with varying fees depending on the size of the broker instance and the number of active brokers in your Amazon MSK clusters. See the Broker Instance Pricing Tables for details.
You also pay for the amount of storage you provision in your cluster. This is calculated by adding up the GB provisioned per hour and dividing by the total number of hours in the month, resulting in a "GB-months" value, as shown in the pricing example. See the Broker Storage Pricing Tables for details. You also have the option to provision additional storage throughput independently, charged by the amount you provision in MB/s per month. This is calculated by adding up MB/s provisioned per hour per broker and dividing by the total number of hours in the month, resulting in a “MB/s-months” value, as shown in (optional) Provisioned Storage Throughput Example.
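The GB-months and MB/s-months conversions described above can be sketched as follows. The provisioned amounts in this snippet are illustrative, not taken from the examples below:

```python
# Sketch of the GB-months and MB/s-months calculations described above.
# The provisioned amounts are illustrative placeholders.

HOURS_IN_MONTH = 31 * 24  # a 31-day month

# Suppose you provision 500 GB for the first 10 days of the month
# and 800 GB for the remaining 21 days.
gb_hours = 500 * 10 * 24 + 800 * 21 * 24
gb_months = gb_hours / HOURS_IN_MONTH  # "GB-months" billed for storage

# Provisioned storage throughput is averaged the same way, per broker:
# e.g. 300 MB/s provisioned on each of 3 brokers for the full month.
mbps_hours = 300 * 3 * HOURS_IN_MONTH
mbps_months = mbps_hours / HOURS_IN_MONTH  # "MB/s-months"

print(round(gb_months, 2), mbps_months)
```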
You are not charged for data transfer between brokers or between Apache ZooKeeper nodes and brokers. You will pay standard AWS data transfer charges for data transferred in and out of Amazon MSK clusters.
If two kafka.t3.small brokers are active in the US East (N. Virginia) AWS Region, and your brokers use a total of 50 GB of storage for 31 days in March, you would pay the following for the month:
Total charge = (broker instance charge) + (storage charge)
If three kafka.m5.large brokers are active in the US East (N. Virginia) AWS Region, and your brokers use 1 TB of storage for 15 days in March and 2 TB of storage for the final 16 days in March, you would pay the following for the month:
Total charge = (broker instance charge) + (storage charge)
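The same calculation with a changing storage footprint can be sketched as below. Both rates are placeholders; consult the Amazon MSK pricing tables for current US East (N. Virginia) prices:

```python
# Worked version of the example above with PLACEHOLDER rates.

BROKER_RATE = 0.21     # assumed $/hour for kafka.m5.large
STORAGE_RATE = 0.10    # assumed $/GB-month
HOURS_IN_MONTH = 31 * 24

broker_charge = 3 * HOURS_IN_MONTH * BROKER_RATE

# 1 TB (1,000 GB) for the first 15 days, 2 TB for the final 16 days,
# averaged into a GB-months value.
gb_hours = 1_000 * 15 * 24 + 2_000 * 16 * 24
gb_months = gb_hours / HOURS_IN_MONTH
storage_charge = gb_months * STORAGE_RATE

total = broker_charge + storage_charge
print(f"{gb_months:.2f} GB-months, ${total:.2f}")
```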
Let’s assume you have three kafka.m5.large instances active in the US East (N. Virginia) AWS Region. You want to retain data for a total of 30 days, with one day's data in primary storage. You are ingesting 2 MB/s of data into your cluster. You want to provision 1 TB of primary storage for your real-time processing and store the last 30 days' worth of data in the low-cost tier. You also want to read all the data stored in the low-cost tier with one application.
Total charges = (broker instance charge) + (primary storage charge) + (low-cost tier charge) + (low-cost tier retrieval charges)
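A sketch of how these four components combine is below. Every rate in it is a placeholder (the low-cost tier storage and retrieval rates in particular are assumptions, not quoted prices); see the Amazon MSK pricing tables for real numbers:

```python
# Sketch of the tiered-storage example above. All rates are PLACEHOLDERS.

BROKER_RATE = 0.21        # assumed $/hour, kafka.m5.large
PRIMARY_RATE = 0.10       # assumed $/GB-month, primary storage
TIER_RATE = 0.06          # assumed $/GB-month, low-cost tier
RETRIEVAL_RATE = 0.0015   # assumed $/GB retrieved from the tier
HOURS = 31 * 24

broker_charge = 3 * HOURS * BROKER_RATE
primary_charge = 1_000 * PRIMARY_RATE           # 1 TB provisioned

# 2 MB/s of ingest is about 172.8 GB/day; with 30 days retained in
# the low-cost tier, steady-state tiered data is roughly:
gb_per_day = 2 * 86_400 / 1_000
tiered_gb = gb_per_day * 30
tier_charge = tiered_gb * TIER_RATE

# One application reads all tiered data back once.
retrieval_charge = tiered_gb * RETRIEVAL_RATE

total = broker_charge + primary_charge + tier_charge + retrieval_charge
print(f"${total:.2f}")
```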
(optional) Provisioned Storage Throughput example
If you choose to turn on provisioned storage throughput and provision 300 MB/s of storage throughput for 31 days in your Amazon MSK cluster with 3 brokers in the US East (N. Virginia) AWS Region, you would pay the following additional charge on top of the broker instance and storage charges shown in the examples above:
Total charge = (broker instance charge) + (storage charge) + (provisioned storage throughput charge)
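The add-on portion of this example can be sketched as follows, assuming the 300 MB/s is provisioned on each broker (per the per-broker MB/s-months definition above). The per-MB/s-month rate is a placeholder:

```python
# Provisioned storage throughput add-on from the example above.
# The rate is a PLACEHOLDER; see the Amazon MSK pricing page.

THROUGHPUT_RATE = 0.08   # assumed $/MB/s-month
HOURS_IN_MONTH = 31 * 24

# 300 MB/s provisioned on each of 3 brokers for the full 31 days.
mbps_hours = 300 * 3 * HOURS_IN_MONTH
mbps_months = mbps_hours / HOURS_IN_MONTH
throughput_charge = mbps_months * THROUGHPUT_RATE

print(f"{mbps_months} MB/s-months -> ${throughput_charge:.2f}")
```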
(Optional) Multi-VPC private connectivity example
If you have producers and consumers in different VPCs or AWS accounts than your Amazon MSK cluster with 3 brokers in the US East (N. Virginia) AWS Region, and you ingest 2 MB/s of data and have 2 consumers reading all the data, you can choose to turn on multi-VPC private connectivity to enable cross-VPC connectivity. You would pay the following additional charge on top of the broker instance and storage charges shown in the examples above:
Total MSK charges = (broker instance charge) + (primary storage charge) + (multi-VPC private connectivity charges).
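The private-connectivity portion can be sketched as below, assuming one cluster with one authentication scheme enabled and that both the ingested data and everything the two consumers read flows through private connectivity. Both rates are placeholders:

```python
# Multi-VPC private connectivity add-on from the example above.
# Both rates are PLACEHOLDERS; see the Amazon MSK pricing page.

CONNECTIVITY_HOURLY = 0.0225  # assumed $/hour per enabled auth scheme
DATA_RATE = 0.0066            # assumed $/GB processed
HOURS = 31 * 24
SECONDS = HOURS * 3_600

# One cluster, one authentication scheme with private connectivity on.
hourly_charge = 1 * HOURS * CONNECTIVITY_HOURLY

# 2 MB/s written in, and 2 consumers each reading all of it back out.
gb_in = 2 * SECONDS / 1_000
gb_out = 2 * gb_in
data_charge = (gb_in + gb_out) * DATA_RATE

total = hourly_charge + data_charge
print(f"${total:.2f}")
```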
With MSK Serverless, you pay an hourly rate for your serverless clusters and an hourly rate for each partition that you create. Additionally, you pay per GB of data that your producers write to and your consumers read from the topics in your cluster. Amazon MSK charges you only for the storage you consume.
You will pay standard AWS data transfer charges for data transferred to or from another region and for data transferred out to the public internet.
Let’s assume you create an MSK Serverless cluster in the US East (Ohio) AWS Region. The cluster has 5 topics with 20 partitions each. Daily, your producers write on average 100 GB of data and your consumers read 200 GB of data. You also retain that data for 24 hours to ensure it is available for replay. In the above scenario, you would pay the following for a 31-day month:
Total = per-hour cluster charge + per-hour partition charge + data-in charges + data-out charges + storage charges
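The five components can be combined as sketched below. All rates are placeholders; check the Amazon MSK pricing page for current US East (Ohio) prices:

```python
# Worked MSK Serverless example from above. All rates are PLACEHOLDERS.

CLUSTER_RATE = 0.75      # assumed $/cluster-hour
PARTITION_RATE = 0.0015  # assumed $/partition-hour
DATA_IN_RATE = 0.10      # assumed $/GB written
DATA_OUT_RATE = 0.05     # assumed $/GB read
STORAGE_RATE = 0.10      # assumed $/GB-month

days, hours = 31, 31 * 24
partitions = 5 * 20      # 5 topics x 20 partitions each

cluster_charge = hours * CLUSTER_RATE
partition_charge = partitions * hours * PARTITION_RATE
data_in_charge = 100 * days * DATA_IN_RATE    # 100 GB written per day
data_out_charge = 200 * days * DATA_OUT_RATE  # 200 GB read per day

# With 24-hour retention, roughly one day's writes (100 GB) are
# stored at any given time.
storage_charge = 100 * STORAGE_RATE

total = (cluster_charge + partition_charge + data_in_charge
         + data_out_charge + storage_charge)
print(f"${total:.2f}")
```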
You pay an hourly rate for connector usage (billed at one-second resolution), with varying fees depending on the number of workers you use for your connector and the size of each worker, measured in number of MSK Connect Units (MCUs). Each MCU provides 1 vCPU of compute and 4 GB of memory. See the pricing table for details.
Let’s say you use Amazon MSK Connect to stream data from a topic in your Amazon MSK cluster to an Amazon Simple Storage Service (S3) bucket in the US East (N. Virginia) AWS Region, and your connector is configured as follows:
The connector autoscales between two and four workers, with each worker using 1 MCU. During the work day (eight hours), the connector scales out to four workers; after the work day is over (16 hours), it scales back in to two workers.
In this case, you would pay the following for the month:
Total charge = Kafka Connect worker charge
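The MCU-hours for this autoscaling pattern can be sketched as below. The per-MCU-hour rate is a placeholder; see the Amazon MSK Connect pricing table for the current price:

```python
# Worked MSK Connect example from above. The rate is a PLACEHOLDER.

MCU_RATE = 0.11   # assumed $/MCU-hour
DAYS = 31

# Each worker uses 1 MCU: 4 workers for the 8 busy hours,
# 2 workers for the remaining 16 hours of each day.
mcu_hours_per_day = 4 * 8 + 2 * 16
mcu_hours = mcu_hours_per_day * DAYS

total = mcu_hours * MCU_RATE
print(f"{mcu_hours} MCU-hours -> ${total:.2f}")
```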