Amazon SES Blog

Amazon SES Best Practices: Top 5 Best Practices for List Management

If you are an Amazon SES customer, you probably know that in addition to managing your email campaigns, you need to be mindful of your reputation as an email sender.

Maintaining a good reputation as a sender is vital if you rely on email delivery as part of your business – if you have a good reputation, your emails have the best chance of arriving in your recipients’ inboxes. If your reputation is questionable, your messages could get dropped or banished to the spam folder. Recipient ISPs may also decide to throttle your email, preventing you from delivering emails to your recipients on time.

This blog post provides five best practices to help you keep your email-sending reputation and deliverability high by focusing on the source of most deliverability problems: list acquisition and management.

Without further ado, here are our Top 5 Best Practices for List Management:

1. Use confirmed opt-in (a.k.a. double opt-in or the gold standard).

The principle behind this is simple – when a user enters an email address on your website, you need to verify that the address is legitimate before you add it to the mailing list you use for your regular campaigns. To this end, you send a verification email to the address and ask the subscriber to click a link in the email, which will then enable the account. By clicking on this link, the email address owner is confirming that they are willing to receive the email notifications they signed up for on your website. The benefits of this practice are evident:

  • You will not send to an email address more than once (or a few times, if the customer requests a second verification email). If the address is fake (or a typo) and the email is sent to someone who doesn’t want to hear from you, then you are less likely to get a complaint from this person because they will only get one email.
  • Since your actual mail campaigns are only going to addresses you have verified, then you know that you are making good use of your resources and that your campaigns are actually appreciated by the recipients.
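As a concrete illustration, the confirmed opt-in flow described above can be sketched in a few lines of Python. The function names, the in-memory sets, and the signing scheme are all hypothetical; a real implementation would persist state in a database and email the token as part of a confirmation link.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret; in practice, load this from configuration.
SIGNING_KEY = secrets.token_bytes(32)

def make_confirmation_token(email: str) -> str:
    """Sign the address so the confirmation link can't be forged
    or reused for a different address."""
    return hmac.new(SIGNING_KEY, email.lower().encode(), hashlib.sha256).hexdigest()

def confirm_subscription(email: str, token: str, pending: set, confirmed: set) -> bool:
    """Move an address from the pending set to the confirmed mailing list
    only if the token from the clicked link matches."""
    expected = make_confirmation_token(email)
    if email in pending and hmac.compare_digest(expected, token):
        pending.discard(email)
        confirmed.add(email)
        return True
    return False

# Signup: record the address as pending and email the token in a link.
pending, confirmed = {"user@example.com"}, set()
token = make_confirmation_token("user@example.com")

# The subscriber clicks the link; only now do they join the real list.
confirm_subscription("user@example.com", token, pending, confirmed)
```

Only addresses in `confirmed` would ever receive campaign mail; addresses that never click the link simply age out of `pending`.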

2. Process bounces and complaints.

SES provides feedback on bounces and complaints through SNS (or email) to make it easy for you to be alerted about addresses that bounce or recipients who complain. If you get a hard bounce or a complaint, you should remove that email address from your list. You should also identify the root cause of the bounces and complaints. For example, say that you notice that your bounce rate for new subscriptions is rising. This could be an indicator that people are signing up for your service using fake email addresses. While it is not unusual for someone to sign up using a fake email address, you need to make sure that you are not encouraging your customers to do so. One way in which you could be encouraging customers to do this is by giving away free stuff without asking for a confirmed opt-in. If you are in this situation, you need to change the incentive that drives customers to sign up using fake addresses: either remove the gifts or implement confirmed opt-in (there is a reason we call this the gold standard!).
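The bounce and complaint notifications that SES publishes through SNS arrive as JSON documents. Below is a minimal Python sketch of a handler that extracts suppressible addresses from such a notification; the field names follow the SES notification format, but the trimmed-down sample payload and the function name are illustrative only.

```python
import json

def addresses_to_suppress(sns_message: str) -> list:
    """Return recipient addresses to remove from the list: hard (permanent)
    bounces and complaints. Transient bounces are left alone."""
    note = json.loads(sns_message)
    kind = note.get("notificationType")
    if kind == "Bounce" and note["bounce"].get("bounceType") == "Permanent":
        return [r["emailAddress"] for r in note["bounce"]["bouncedRecipients"]]
    if kind == "Complaint":
        return [r["emailAddress"] for r in note["complaint"]["complainedRecipients"]]
    return []

# A trimmed-down example of the JSON SES publishes through SNS.
sample = json.dumps({
    "notificationType": "Bounce",
    "bounce": {
        "bounceType": "Permanent",
        "bouncedRecipients": [{"emailAddress": "bad-address@example.com"}],
    },
})

suppress = addresses_to_suppress(sample)  # ["bad-address@example.com"]
```

In practice, this logic would run in whatever consumes your SNS topic (an SQS poller, an HTTP endpoint, and so on) and write the addresses to your suppression store.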

3. Remove non-engagers.

You need to operate under the assumption that if a customer is not opening or clicking your email, they are not interested in what you’re sending. Define a timeframe that makes sense for your business, and if a recipient doesn’t interact with your mail within that timeframe, stop emailing them. This tactic is a great complement to double opt-in and should be standard for any email sender. Regardless of whether a customer originally opted in through double opt-in or just a regular signup, an email address can go stale and become a spamtrap. Spamtraps are silent reputation traps: you will get no indication that you are hitting them, so removing non-engagers is the only way to avoid them. Many organizations use spamtraps to measure a sender’s reputation, and particularly how well the sender is measuring engagement. If you continue to email spamtraps, your mail could end up in the spam folder, your domain could be blacklisted, and SES could suspend your service.
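A minimal sketch of the pruning step in Python, assuming you already track each recipient's last open or click (the data shape, the addresses, and the 180-day window here are all illustrative; pick a window that fits your business):

```python
from datetime import datetime, timedelta

# Assumed shape: address -> timestamp of last open or click (None = never).
last_engagement = {
    "active@example.com": datetime(2015, 9, 1),
    "quiet@example.com": datetime(2014, 1, 15),
    "never@example.com": None,
}

def prune_non_engagers(engagement, now, window=timedelta(days=180)):
    """Keep only recipients who opened or clicked within the window."""
    cutoff = now - window
    return {addr for addr, seen in engagement.items()
            if seen is not None and seen >= cutoff}

# Only "active@example.com" survives: the others are stale or never engaged.
active = prune_non_engagers(last_engagement, now=datetime(2015, 10, 1))
```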

4. Make it easy for your recipients to unsubscribe.

If you are sending bulk email (as opposed to mail that is the result of a transaction), then you need to make it easy for customers to opt out of the mail. Include an easy-to-spot opt-out link in every bulk email, and use the list-unsubscribe header for easy integration with ISPs who support it. If a customer does not want the mail, you should not send it to them. Sending email to an unwilling recipient will do more harm than good. In many locations, including the US, Canada, and much of Europe and Asia, enabling recipients to easily opt out of your email is a legal requirement.
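As a sketch, here is how you might attach a List-Unsubscribe header to a bulk message using Python's standard email library; the addresses and unsubscribe URL are placeholders.

```python
from email.message import EmailMessage

def build_bulk_message(sender, recipient, subject, text):
    """Attach both List-Unsubscribe forms (a mailto address and an HTTP
    link) plus a visible opt-out link in the body."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    # ISPs that support the header surface their own unsubscribe button.
    msg["List-Unsubscribe"] = ("<mailto:unsubscribe@example.com>, "
                               "<https://example.com/unsubscribe>")
    msg.set_content(text + "\n\nTo stop receiving these emails, visit: "
                    "https://example.com/unsubscribe")
    return msg

msg = build_bulk_message("news@example.com", "user@example.com",
                         "October deals", "Hello!")
```

The serialized message (`msg.as_string()`) can then be handed to SES via SendRawEmail or the SMTP interface.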

5. Keep your mailing lists independent.

If you operate more than one website, you should never mix your subscriber lists. Customers who sign up for website A should never (under any circumstance) receive an email from website B, unless they sign up for that one too. The reason is simple: These customers have only agreed to receive email from website A. Furthermore, if your customers get mail from a website unknown to them, they are likely to mark that mail as spam, thus hurting your email reputation.

Never forget: Your email campaigns are only as good as their ability to reach your customers, and following best practices can be the difference between a delivered and a dropped email. While the above best practices should help you, list management is only a part of the equation – the quality of your content also plays a big role in your ability to deliver email. Nevertheless, we hope our recommendations in this post will prove useful in your email endeavors.

If you have questions, feel free to let us know via the SES Forums or in the comment section of this blog.

Receiving Email with Amazon SES

The Amazon SES team is pleased to announce that you can now use SES to receive email!

For the past four years, SES has strived to make your life easier by maintaining a fleet of SMTP servers ready to send mail when you want it. There’s no need to worry about scaling, ensuring message delivery, or navigating relationships with countless email service providers.

However, you’d still need to manage a fleet of SMTP servers if you wanted to receive mail. As with sending mail, receiving mail comes with its own set of headaches: scaling for traffic spikes, blocking malicious senders, filtering out spam and viruses, and ultimately routing mail to your application, to name a few.

As of today, the SES team would like to invite you to say goodbye to these hassles, and rely on SES to simply receive your mail just as you rely on us to simply send your mail.

Why should I use SES to receive mail?

SES is ideally suited for servicing mail that is programmatically actionable. The following are a handful of common use cases that you can now leverage SES to solve:

  • Automatically create support tickets from customer email.
  • Implement an email auto-responder.
  • Process email list unsubscribe requests.
  • Process email bounces and complaints.
  • Create an email archival solution.
  • Update correspondence in tickets, forums, etc. by email.
  • Receive files from customers via email.

You can also use SES to manage your organization’s entire mail stream, directing mail destined for personal inboxes to Amazon WorkMail while processing customer service mail and the like programmatically with SES.

How does it work?

Think of SES as an email gateway to the AWS ecosystem. After onboarding your domain onto SES, we will receive mail on your behalf, and allow you to consume it through a variety of different AWS services. For example, you can configure SES to deliver all of your mail to an Amazon S3 bucket, and process it directly using AWS Lambda.

SES empowers you to make decisions about how your mail is processed through the concept of a rule set. Every account that receives mail using SES has a single active rule set that you customize to dictate to SES what you’d like done with your mail across all of your SES-managed domains.

A rule set is simply an ordered list of rules, and a rule is a combination of a matching condition and an ordered list of actions. A condition is something like “all mail to a given address” or “all mail to a domain and all of its subdomains.” Actions are things like “Encrypt my mail using my AWS KMS key, write it to my S3 bucket, and notify me of the delivery via Amazon SNS,” or “Asynchronously execute my Lambda function that updates my mailing list based on unsubscribe emails,” or “Send me an SNS notification containing the email.” A more thorough discussion of rule sets, rules, and actions can be found in our developer guide.

Your rule set is evaluated sequentially for every message SES receives, and only the actions that apply to the message are executed. This enables you to write rules that route mail differently based on individual message characteristics. You can have a rule that drops mail that SES flags as spam across all of your domains, another that writes mail for one address to one S3 bucket, another that writes mail for a different address to a different bucket and then executes a Lambda function, but only when the email contains a specific header value, and so on.
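To make the evaluation model concrete, here is a toy Python simulation of how an ordered rule set might be applied to a single message; the condition and action names are illustrative labels, not real SES API values.

```python
# A toy model: each rule pairs a matching condition with an ordered list of
# actions; rules are checked in order and every matching rule's actions run.
def evaluate_rule_set(rule_set, recipient, headers):
    executed = []
    for rule in rule_set:
        if rule["condition"](recipient, headers):
            executed.extend(rule["actions"])
    return executed

rule_set = [
    {"condition": lambda r, h: True,  # applies across all domains
     "actions": ["drop-if-spam"]},
    {"condition": lambda r, h: r.endswith("@a.example.com"),
     "actions": ["write-to-bucket-a"]},
    {"condition": lambda r, h: (r.endswith("@b.example.com")
                                and h.get("X-Campaign") == "beta"),
     "actions": ["write-to-bucket-b", "invoke-lambda"]},
]

# A message to b.example.com with the matching header triggers rules 1 and 3.
actions = evaluate_rule_set(rule_set, "user@b.example.com",
                            {"X-Campaign": "beta"})
```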

Amazon SES rule set

The system was designed to be both highly customizable and convenient to use. Our goal is to minimize the amount of custom email routing or parsing logic that your application needs to do, and, if you capitalize on our Lambda integration, you may not even need an application at all!

How do I get started?

The best place to start is the SES developer guide. It provides detailed instructions on how to onboard a domain onto SES to receive mail, as well as walks you through the process of setting up rules to govern your mail flows. Then, head over to the SES console to set up your domains to begin receiving mail!

Finally, if you’re heading to AWS re:Invent this year, be sure to check out our presentation showcasing our new features!

Announcing Sending Authorization!

The Amazon SES team is excited to announce the release of sending authorization! This feature allows users to grant permission to use their email addresses or domains to other accounts or IAM users.

Note that for simplicity, we’ll be referring to email addresses and domains collectively as “identities” and the accounts and IAM users receiving permissions as “delegate senders.”

Why should I use sending authorization?

The primary incentive to use sending authorization is to enable cross-account identity usage with fine-grained permission control. Let’s look at two example use cases.

Say you’ve just been hired to create and manage an email marketing campaign for an online retailer. Until now, in order to send the retailer’s marketing emails under their domain name, you would have had to convince them to allow you to verify their domain under your own AWS account—this would let you send emails using any address under their domain, at any time, and for any purpose, which the retailer might not be comfortable with. You’d also have to work out who would get the bounce/complaint/delivery notifications, which might be additionally confusing because the notifications from your marketing emails would be sent to the same place as the notifications from the transactional emails the retailer is handling.

With sending authorization, however, you can use the retailer’s identity and receive delivery, bounce and complaint notifications while letting them retain sole ownership of it. Identity owners will still be able to monitor usage with delivery, bounce, and complaint notifications and can adjust permissions at any time, and use AWS condition keys to finely control the scope of those permissions.

Imagine instead that you own or administrate for a company that has several disparate teams that all wish to use SES to send emails using a common email address. Until now, you would have had to create and maintain IAM users for each of these teams under the same account (in which case they still would have access to each other’s identities) or verify the same identity under multiple different accounts.

With sending authorization, you can verify the common identity under the single account (perhaps yours) and simply grant the other teams permission to use it. If you still prefer the IAM policy route, you can take advantage of the new condition keys released with sending authorization to tighten up the IAM policies.

Sending authorization is designed to be powerful and flexible. In fact, Amazon WorkMail uses sending authorization to provide an enterprise-level email and calendaring service built on SES.

How does sending authorization work?

Identity owners grant permissions by creating authorization policies. Let’s look at an example. The policy below gives account 9999-9999-9999 permission to use the domain owned by account 8888-8888-8888 in SendEmail and SendRawEmail requests, as long as the “From” address matches the marketing address in the policy’s condition (with any address tags).

{
  "Id": "SampleAuthorizationPolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AuthorizeMarketer",
      "Effect": "Allow",
      "Resource": "arn:aws:ses:us-east-1:888888888888:identity/",
      "Principal": {"AWS": ["999999999999"]},
      "Action": ["SES:SendEmail", "SES:SendRawEmail"],
      "Condition": {
        "StringLike": {
          "ses:FromAddress": "marketing+.*"
        }
      }
    }
  ]
}
You could write this policy yourself, or you could use the Policy Generator in the SES console, which is even easier. Your Policy Generator page would look like:

Policy generator

Identity owners can add or create a policy for an identity using the PutIdentityPolicy API or the SES console, and can have up to 20 different policies for each identity. You can read more about how to construct and use policies in our developer guide.
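As an illustration, a policy like the one above can be assembled programmatically and serialized into the JSON string you would pass to PutIdentityPolicy; the account IDs, domain, and helper function here are placeholders.

```python
import json

def make_sending_auth_policy(owner_account, delegate_account, domain, from_pattern):
    """Build a sending authorization policy as a dict; its JSON
    serialization is the document you'd attach to the identity."""
    return {
        "Id": "SampleAuthorizationPolicy",
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AuthorizeMarketer",
            "Effect": "Allow",
            "Resource": f"arn:aws:ses:us-east-1:{owner_account}:identity/{domain}",
            "Principal": {"AWS": [delegate_account]},
            "Action": ["SES:SendEmail", "SES:SendRawEmail"],
            "Condition": {"StringLike": {"ses:FromAddress": from_pattern}},
        }],
    }

policy_json = json.dumps(
    make_sending_auth_policy("888888888888", "999999999999",
                             "example.com", "marketing+.*"),
    indent=2)
```

Generating policies this way keeps account IDs and patterns in one place, which helps when you manage several delegate senders.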

How do I make a call with someone else’s identity that I have permission to use?

You’ll specify to SES that you’re using someone else’s identity by presenting an ARN when you make a request. For example, an ARN can refer to a domain identity owned by account 9999-9999-9999 in the US West (Oregon) AWS region.


Depending on how you make your call, you may need to provide up to three different ARNs: a “source” identity ARN, a “from” identity ARN, and a “return-path” identity ARN. The SendEmail and SendRawEmail APIs have new optional parameters for this purpose, but users of our SMTP endpoint or our SendRawEmail API have the option to instead provide the ARNs as X-headers (X-SES-Source-ARN, X-SES-From-ARN, and X-SES-Return-Path-ARN). See our SendEmail and SendRawEmail documentation for more information about these identities. SES removes these headers before your email is sent.
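For example, a delegate sender using the SMTP interface might set the X-headers like this with Python's standard email library; the ARN (account, region, and domain) is a placeholder.

```python
from email.message import EmailMessage

# Illustrative ARN; the account, region, and domain are placeholders.
IDENTITY_ARN = "arn:aws:ses:us-west-2:999999999999:identity/example.com"

def build_delegate_message(sender, recipient):
    """Attach the three X-headers that tell SES which owner identity the
    delegate is sending as; SES strips them before delivery."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Sent as a delegate"
    msg["X-SES-Source-ARN"] = IDENTITY_ARN
    msg["X-SES-From-ARN"] = IDENTITY_ARN
    msg["X-SES-Return-Path-ARN"] = IDENTITY_ARN
    msg.set_content("Hello from a delegate sender.")
    return msg

msg = build_delegate_message("marketing@example.com", "user@example.net")
```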

What happens to notifications when email is sent by a delegate?

Both the identity owner and the delegate sender can set their own bounce, complaint, and delivery notification preferences, and SES respects both sets of preferences independently. As a delegate sender, you can configure your notification settings almost exactly as you would if you were the identity owner. The two key differences are that you use ARNs in place of identities, and that you cannot configure feedback forwarding (that is, receiving bounces and complaints via email) in the console or the API. This doesn’t mean that delegate senders cannot use feedback forwarding, however. If you are a delegate sender and you want bounces and complaints forwarded to an email address you own, just set the “return-path” address of your emails to an identity that you own. You can read more about this in our developer guide.

Billing, sending limits, and reputation

Cross-account emails count against the delegate’s sending limits, so the delegate is responsible for applying for any sending limit increases they might need. Similarly, delegated emails get charged to the delegate’s account, and any bounces and complaints count against the delegate’s reputation.

Sending authorization and IAM policies

It’s important to distinguish between SES sending authorization policies and IAM policies. Although the policies look similar at first glance, sending authorization policies dictate who is allowed to use an SES identity, while IAM policies (set using AWS Identity and Access Management) control what IAM users are allowed to do. The two are independent: it’s entirely possible for an IAM user to be unable to use an identity despite having authorization from the owner, because the user’s IAM policies do not grant permission to use SES (and vice versa). Keep in mind, however, that by default, IAM users with SES access are allowed to use any identities owned by their parent account unless a sending authorization policy explicitly dictates otherwise.

On a related note, with the release of sending authorization, we’re externalizing several new condition keys that you can use in your sending authorization and/or IAM policies:

  • ses:Recipients
  • ses:FromAddress
  • ses:FromDisplayName
  • ses:FeedbackAddress

These can be used to control when policies apply. For example, you might use the “ses:FromAddress” condition key to write an IAM policy that only permits an IAM user to call SES using a certain “From” address. For more information about how to use our new condition keys, see our developer guide.
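For instance, an IAM policy that pins an IAM user to a single “From” address might look like the following sketch (the address is a placeholder, and your actual resources and actions will depend on your setup):

```python
import json

# Sketch of an IAM policy using the ses:FromAddress condition key to allow
# sending only when the "From" address is an exact (placeholder) value.
iam_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ses:SendEmail", "ses:SendRawEmail"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {"ses:FromAddress": "marketing@example.com"}
        },
    }],
}

policy_json = json.dumps(iam_policy, indent=2)
```

Attached to an IAM user, a policy like this would reject SendEmail calls from that user with any other “From” address.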

We hope you find this feature useful! If you have any questions or comments, let us know in the SES Forum or here in the comment section of the blog.

SES Limit Increase Form Consolidation

Hi SES Senders,

We on the SES team strive to make things easier for our customers. As such, we’ve recently streamlined SES’s limit increase request process. Instead of having separate Support Center forms for Production Access and Sending Limit increases, we now have one form that serves both purposes. Our motivation behind the change was not just to consolidate the forms. The concept of “Production Access” was a little confusing, so we took this opportunity to deprecate that terminology.

Backing up a bit: With respect to SES, your AWS account can be in one of two states: in the sandbox, or out of the sandbox. “In the sandbox” means two things: 1) You can only send to verified email addresses and domains, and 2) Your sending limits are set to their starting values, which is a sending quota of 200 emails per 24-hour period, and a maximum send rate of 1 email per second. None of that has changed.

What’s changed is the process of getting out of the sandbox. Previously, to get out of the sandbox, you needed to submit an SES Production Access limit increase request via Support Center. A granted request would raise your sending limits and also remove the limitation on “To” addresses. If you later found that you needed higher sending limits, you’d submit a separate form in Support Center called an SES Sending Quota request.

Now, there is just one form: SES Sending Limits. With your first sending limit increase, you’ll automatically be moved out of the sandbox. That is, in addition to increasing your sending limits, you’ll be able to send to any “To” address.

The new form is very similar to the old SES Sending Quota form. It’s still in Support Center. You can click this link to go directly to the new form. However, if you ever need to get to it from the Support Center console, click the Create Case button. On the Create Case form, choose Service Limit Increase and then select SES Sending Limits from the drop-down menu as shown below.

Create SES sending limit increase case

Once you get to the form, you then choose the AWS region. Remember that sending limits apply to each region separately, and that also applies to being in or out of the sandbox. As always, your AWS account starts out in the sandbox in all three regions in which SES is available. So, for example, if you want to get out of the sandbox in both US East (N. Virginia) and EU (Ireland), you’ll need to submit a separate request for each region. You can do this all on the same form by clicking the Add another request button after completing your previous request.

Below the region, you’ll select the limit type. SES has two sending limits: daily sending quota (the number of emails you can send in a 24-hour period) and maximum send rate (the maximum number of emails that SES can accept from your account per second, though note that the actual rate at which SES accepts your messages might be less than the maximum send rate). You can choose only one of the two limits in the drop-down menu, although if you really need to request increases in both, you can use the Add another request button to submit a request for the other limit. However, customers are most often interested in raising their daily sending quota.
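Since SES may reject requests that exceed your maximum send rate, senders often pace their calls client-side. The small throttle below is an illustrative Python sketch, not an SES feature; the 1 message/second default matches the sandbox starting rate.

```python
import time

class SendRateThrottle:
    """Client-side pacing so calls never exceed a maximum send rate."""

    def __init__(self, max_per_second=1.0):
        self.interval = 1.0 / max_per_second
        self.next_allowed = 0.0

    def wait(self, now=None, sleep=time.sleep):
        """Block until the next send slot; returns the time of the slot.
        `now` and `sleep` are injectable for testing."""
        now = time.monotonic() if now is None else now
        if now < self.next_allowed:
            sleep(self.next_allowed - now)
            now = self.next_allowed
        self.next_allowed = now + self.interval
        return now

# Calling throttle.wait() before each SendEmail spaces calls >= 0.5 s apart.
throttle = SendRateThrottle(max_per_second=2.0)
```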

SES sending limit increase

Once you choose the limit type, a field will appear for you to enter the amount. Be sure to only request the amount you really need. Keep in mind that you are not guaranteed to receive the amount you request, and the higher the limit you request, the more justification you’ll need to be considered for that amount.

The next fields are your mail type and your website URL. Although the website URL isn’t required, we highly recommend that you provide one if you have it, because it helps us evaluate your request.

The next three fields help us make sure that your sending is a good fit for our platform.

Finally, there is the Use Case field. This is where you should explain your situation in as much detail as possible; for example, describe the type of emails you are sending and how email-sending fits into your business. The more information you can provide that indicates that you are sending high-quality emails to recipients who want and expect it, the more likely we are to approve your request. The higher the jump you are requesting from your existing quota, the more detail you should provide.

SES Sending Limit cases are generally processed within one business day, but plan ahead and don’t wait until your situation is critical to submit the request.

We hope that consolidating our forms and deprecating the “Production Access” terminology makes the SES limit increase process more straightforward. If you have questions or comments, feel free to let us know on the SES forum.

SES and Haskell

by Nolan Sandberg


Amazon SES and most of the other AWS services have SDKs for languages like Java, .NET, PHP, Python, and Ruby. Most SDKs are just wrappers around the HTTP APIs that the services provide. If your favorite language isn’t supported by an AWS SDK, you can write your own client or use third-party APIs to call SES. In this post we are going to look at implementing a very minimal SES client in Haskell, a purely functional programming language.

I’ll cover some basic housekeeping before moving forward with the tutorial. To best follow this tutorial, you will need an intermediate understanding of Haskell and a basic understanding of SES. Haskell has several string-like types, which can create some confusion for beginners. String and Text are for textual data, while ByteString is usually for binary data. Since the cryptographic functions provided in the cryptohash library work on ByteStrings, we will use ByteString for essentially everything in our example, with the exception of error messages and parsing SES responses. We will also use the http-conduit library to make the HTTP requests to AWS, and the GHC extension OverloadedStrings so we can write ByteStrings as string literals. This post is in the form of a literate Haskell file, where the code and explanations appear together. First, here are the imports of the libraries we will be using.

{-# LANGUAGE OverloadedStrings #-}

module Main where

import Control.Applicative
import Control.Monad
import Crypto.Hash (Digest, SHA256, hmac, hmacGetDigest, hash, digestToHexByteString)
import Data.Byteable (toBytes)
import Data.ByteString (ByteString)
import qualified Data.ByteString.Base16 as B16 (encode)
import qualified Data.ByteString.Char8 as C
import Data.CaseInsensitive (original)
import Data.Char (toLower)
import Data.Text (Text, unpack)
import Data.Time (getCurrentTime)
import Data.Time.Format (formatTime, FormatTime)
import Data.Time.Clock (UTCTime)
import Data.List (intersperse, lines, sortBy)
import Data.Monoid ((<>))
import Network.HTTP.Conduit
import Network.HTTP.Types
import Network.HTTP.Types.Header
import Network.HTTP.Types.Method
import System.Locale (defaultTimeLocale)
import System.Environment (getEnv)

import Blaze.ByteString.Builder (toByteString)
import Data.Aeson

Signing AWS Requests

The first and arguably most difficult part of calling SES via HTTP is signing requests for authentication. We are going to implement the latest version of the request signing process which is documented here.

First, we need a function that can create a canonical request from the raw HTTP request. This essentially comes down to appending several parameters separated by newlines.

canonicalRequest :: Request -> ByteString -> ByteString
canonicalRequest req body = C.concat $
    intersperse "\n"
        [ method req
        , path req
        , queryString req
        , canonicalHeaders req
        , signedHeaders req
        , hexHash body
        ]

hexHash :: ByteString -> ByteString
hexHash p = digestToHexByteString (hash p :: Digest SHA256)

The canonical headers parameter is created by separating each header name and header value by a colon and then separating each of those strings with a new line. In http-conduit, the host header is not included in the requestHeaders field of the Request record so we need to add that manually.

headers :: Request -> [Header]
headers req = sortBy (\(a,_) (b,_) -> compare a b) (("host", host req) : requestHeaders req)

canonicalHeaders :: Request -> ByteString
canonicalHeaders req =
    C.concat $ map (\(hn,hv) -> bsToLower (original hn) <> ":" <> hv <> "\n") hs
  where hs = headers req

bsToLower :: ByteString -> ByteString
bsToLower = C.map toLower

The signed headers parameter is a list of lowercase header names separated by semicolons.

signedHeaders :: Request -> ByteString
signedHeaders req =
    C.concat . intersperse ";" $ map (\(hn,_) -> bsToLower (original hn)) hs
  where hs = headers req

Now we have to create the derived key. The HMAC algorithm takes a key and plaintext and returns a fixed length string. We create the derived key by repeatedly using the HMAC algorithm to hash a value and then using the returned value as the key for the subsequent hash. The starting key is the user’s AWS secret access key prepended by the string “AWS4”. The values that will be hashed are the date, region, service, and finally, the string “aws4_request”.

v4DerivedKey :: ByteString -> -- ^ AWS Secret Access Key
                ByteString -> -- ^ Date in YYYYMMDD format
                ByteString -> -- ^ AWS region
                ByteString -> -- ^ AWS service
                ByteString
v4DerivedKey secretAccessKey date region service = hmacSHA256 kService "aws4_request"
  where kDate = hmacSHA256 ("AWS4" <> secretAccessKey) date
        kRegion = hmacSHA256 kDate region
        kService = hmacSHA256 kRegion service

hmacSHA256 :: ByteString -> ByteString -> ByteString
hmacSHA256 key p = toBytes $ (hmacGetDigest $ hmac key p :: Digest SHA256)

Next, we create the string to sign. This string contains the information we have computed up to this point along with the AWS region, the AWS service name, and the current time. For SES, the service name is “ses”.

stringToSign :: UTCTime    -> -- ^ Current time
                ByteString -> -- ^ The AWS region
                ByteString -> -- ^ The AWS service
                ByteString -> -- ^ Hashed canonical request
                ByteString
stringToSign date region service hashConReq = C.concat
    [ "AWS4-HMAC-SHA256\n"
    , C.pack (formatAmzDate date) , "\n"
    , C.pack (format date) , "/"
    , region , "/"
    , service
    , "/aws4_request\n"
    , hashConReq
    ]

format :: UTCTime -> String
format = formatTime defaultTimeLocale "%Y%m%d"

formatAmzDate :: UTCTime -> String
formatAmzDate = formatTime defaultTimeLocale "%Y%m%dT%H%M%SZ"

Finally, we create the signature by combining the canonical request, the string to sign, and the derived key. Although the Request type has a field for the request body, we explicitly pass the body of the request around because we only want to deal with strict ByteStrings for simplicity.

createSignature ::  Request         -> -- ^ Http request
                    ByteString      -> -- ^ Body of the request
                    UTCTime         -> -- ^ Current time
                    ByteString      -> -- ^ Secret Access Key
                    ByteString      -> -- ^ AWS region
                    ByteString
createSignature req body now key region = v4Signature dKey toSign
  where canReqHash = hexHash $ canonicalRequest req body
        toSign = stringToSign now region "ses" canReqHash
        dKey = v4DerivedKey key (C.pack $ format now) region "ses"

v4Signature :: ByteString -> ByteString -> ByteString
v4Signature derivedKey payLoad = B16.encode $ hmacSHA256 derivedKey payLoad

With the version 4 signing implemented, we can move on to implementing the SendEmail call.

SES SendEmailCall

SES is currently available in three AWS regions: us-east-1, us-west-2, and eu-west-1. The HTTP endpoints for SES can be found here.

We define a simple record that carries all of the information required to call the SendEmail API.

data SendEmailRequest = SendEmailRequest
    { region            :: ByteString
    , accessKeyId       :: ByteString
    , secretAccessKey   :: ByteString
    , source            :: ByteString
    , to                :: [ByteString]
    , subject           :: ByteString
    , body              :: ByteString
    } deriving Show

usEast1 :: ByteString
usEast1 = "us-east-1"

usWest2 :: ByteString
usWest2 = "us-west-2"

euWest1 :: ByteString
euWest1 = "eu-west-1"

The http-conduit library parses the URL and then we configure it further based on the parameters of the SendEmailRequest. By setting the accept header to “text/json”, SES will return a response in JSON which we can then parse with the aeson library. The x-amz-date header is required to make AWS requests. Finally, we sign the request and add the authentication header before sending the request to SES.

sendEmail :: SendEmailRequest -> IO (Either String SendEmailResponse)
sendEmail sendReq = do
    fReq <- parseUrl $ "https://email." ++ C.unpack (region sendReq) ++ ".amazonaws.com"
    now <- getCurrentTime
    let req = fReq
                { requestHeaders =
                    [ ("Accept", "text/json")
                    , ("Content-Type", "application/x-www-form-urlencoded")
                    , ("x-amz-date", C.pack $ formatAmzDate now)
                    ]
                , method = "POST"
                , requestBody = RequestBodyBS reqBody
                }
    resp <- withManager (httpLbs (authenticateRequest sendReq now req reqBody))
    case responseStatus resp of
        (Status 200 _) -> return $ eitherDecode (responseBody resp)
        (Status code msg) ->
            return $ Left ("Request failed with status code <" ++
                show code ++ "> and message <" ++ C.unpack msg ++ ">")
  where
    reqBody = renderSimpleQuery False $
                    [ ("Action", "SendEmail")
                    , ("Source", source sendReq)
                    ] ++ toAddressQuery (to sendReq) ++
                    [ ("Message.Subject.Data", subject sendReq)
                    , ("Message.Body.Text.Data", body sendReq)
                    ]

authenticateRequest :: SendEmailRequest -> UTCTime -> Request -> ByteString -> Request
authenticateRequest sendReq now req body =
    req { requestHeaders =
            authHeader now (accessKeyId sendReq)
                           (signedHeaders req) sig
                           (region sendReq) :
            requestHeaders req
        }
  where sig = createSignature req body now (secretAccessKey sendReq) (region sendReq)

toAddressQuery :: [ByteString] -> SimpleQuery
toAddressQuery tos =
    zipWith (\index address ->
                ( "Destination.ToAddresses.member." <>
                    C.pack (show index)
                , address )
            ) [1..] tos

authHeader ::   UTCTime     -> -- ^ Current time
                ByteString  -> -- ^ Access key ID
                ByteString  -> -- ^ Signed headers
                ByteString  -> -- ^ Signature
                ByteString  -> -- ^ AWS Region
                Header
authHeader now sId signHeads sig region =
    ( "Authorization"
    , C.concat
        [ "AWS4-HMAC-SHA256 Credential="
        , sId , "/"
        , C.pack (format now) , "/"
        , region
        , "/ses/aws4_request, SignedHeaders="
        , signHeads
        , ", Signature="
        , sig
        ]
    )
SES provides a response to successful requests with the request ID and message ID. To handle SES responses, we will use the Either type and follow a common practice in Haskell: the Left side carries information about a failure and the Right side carries the result of a success. If we receive any response code other than HTTP success code 200, we will return an error in the Left side of the Either type. For successful calls, we will use aeson to decode the response. We have created a data structure to hold the request and message IDs and a FromJSON instance to decode the JSON.

data SendEmailResponse = SendEmailResponse
    { requestId     :: Text
    , messageId     :: Text
    } deriving Show

instance FromJSON SendEmailResponse where
    parseJSON (Object o) = do
        response <- o .: "SendEmailResponse"
        reqId <- response .: "ResponseMetadata" >>= (.: "RequestId")
        msgId <- response .: "SendEmailResult" >>= (.: "MessageId")
        return $ SendEmailResponse reqId msgId
    parseJSON _ = mzero

To make an actual call to SES we use our AWS access key ID and secret access key to construct the appropriate SendEmailRequest. We then pass that request to the sendEmail function.

main :: IO ()
main = do
    awsId <- C.pack <$> getEnv "AWS_ACCESS_KEY_ID"
    awsSecret <- C.pack <$> getEnv "AWS_SECRET_ACCESS_KEY"
    let sendRequest = SendEmailRequest usEast1 awsId awsSecret ""
                        [""] "Sent from Haskell"
                        "This email was sent through SES using Haskell!"
    response <- sendEmail sendRequest
    case response of
        Left err -> putStrLn $ "Failed to send : " ++ err
        Right resp ->
            putStrLn $ "Successfully sent with message ID : " ++ unpack (messageId resp)

Now we have a minimal working example to call SES from Haskell! Happy sending!

SPF and Amazon SES

by Adrian Hamciuc

Update (3/14/16): To increase your SPF authentication options, Amazon SES now enables you to use your own MAIL FROM domain. For more information, see Authenticating Email with SPF in Amazon SES.

One of the most important aspects of email communication today is making sure the right person sent you the right message. There are several standards in place to address various aspects of securing email sending; one of the most commonly known is SPF (the short form of Sender Policy Framework). In this blog post we explain what SPF is, how it works, and how Amazon SES handles it. We also address the most common questions we see from customers with regard to their concerns around email security.

What is SPF?

Described in RFC 7208, SPF is an open standard designed to prevent sender address forgery. In particular, it is used to confirm that the IP address from which an email originates is allowed to send emails by the owner of the domain that sent the email. What does that mean and how does it happen?

Before going into how SPF works, we should clarify exactly what it does and does not do. First, let’s separate the actual email message body and its headers from the SMTP protocol used to send it. SPF works by authenticating the IP address that originated the SMTP connection to the domain used in the SMTP MAIL-FROM and/or the HELO/EHLO command. The From header, which is part of the email message itself, is not covered by SPF validation. A separate standard, DomainKeys Identified Mail (DKIM), is used to authenticate the message body and headers against the From header domain (which can be different from the domain used in the SMTP MAIL-FROM command).

Now that we’ve talked about what SPF does, let’s look at how it actually works. SPF involves publishing a DNS record in the domain that wants to allow IP addresses to send from it. This DNS record needs to contain either blocks of IP addresses that are permitted to send from it, or another domain to which authorization is delegated (or both). When an ISP receives an email and wants to validate that the IP address that sent the mail is allowed to send it on behalf of the sending domain, the ISP performs a DNS query against the SPF record. If such a record exists and contains the IP address in question or delegates to a domain that contains it, then we know that the IP address is authorized to send emails from that domain.
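
To make the record check concrete, here is a minimal, hypothetical evaluator for the ip4/ip6 mechanisms of an SPF record, written in Python with only the standard library. It is a sketch only: a real resolver must also follow include:, a:, mx:, and exists: mechanisms via DNS and honor the +, -, ~, and ? qualifiers defined in RFC 7208.

```python
import ipaddress

def spf_allows(spf_record, sender_ip):
    """Check an IP against the ip4:/ip6: mechanisms of an SPF TXT record.

    Illustrative sketch only: a real SPF evaluator must also resolve
    include:, a:, mx:, and exists: mechanisms via DNS and honor the
    qualifiers (+, -, ~, ?) defined in RFC 7208.
    """
    ip = ipaddress.ip_address(sender_ip)
    for mechanism in spf_record.split():
        if mechanism.startswith(("ip4:", "ip6:")):
            network = ipaddress.ip_network(mechanism.split(":", 1)[1], strict=False)
            if ip in network:
                return True
    return False

# A hypothetical record authorizing one block of addresses:
record = "v=spf1 ip4:192.0.2.0/24 ~all"
print(spf_allows(record, "192.0.2.17"))    # True
print(spf_allows(record, "198.51.100.1"))  # False
```

Only addresses inside the published block pass; everything else falls through to the record's catch-all qualifier.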

SPF and Amazon SES

If you are using Amazon SES to send from your domain, you need to know that the current SES implementation involves sending emails from an SES-owned MAIL-FROM domain. This means that you do not need to make any changes to your DNS records in order for your emails to pass SPF authentication.

Common concerns

There are a couple of questions we frequently hear from customers with regard to SPF authorization and how it relates to Amazon SES usage.

The first concern seems to be how your sending is affected if SPF authentication is performed against Amazon SES and not against your own domain.

If you’re wondering whether any other SES customer can send on your behalf, the answer is no. SES does not allow sending from a specific domain or email address until that domain or email address has been successfully verified with SES, a process that cannot take place without the consent of the domain/address’s owner.

The next question is whether you can still have a mechanism under your control that can authenticate the email-related content that you do control (things such as the message body, or various headers such as the From header, subject, or destinations). The answer is yes — Amazon SES offers DKIM signing capabilities (or the possibility to roll out your own). DKIM is another open standard that can authenticate the integrity of an email message, including its content and headers, and can prove to ISPs that your domain (not Amazon’s or someone else’s) takes responsibility and claims ownership of that specific email.

Another concern you may have is how much flexibility you get in using SPF to elicit a specific ISP response for unauthenticated or unauthorized emails from your domain. In particular, this concern translates into configuring DMARC (short for Domain-based Message Authentication, Reporting & Conformance) to work with SES. DMARC is a standard way of telling ISPs how to handle unauthenticated emails, and it’s based on both a) Successful SPF and/or DKIM authentication and b) Domain alignment (all authenticated domains must match). As explained above, your MAIL-FROM domain is currently an SES domain, which doesn’t match your sending domain (From header). As a result, SPF authentication will be misaligned for DMARC purposes. DKIM, on the other hand, provides the necessary domain alignment and ultimately satisfies DMARC because you are authenticating your From domain.

Simply put, you must enable DKIM signing for your verified domain in order to be able to successfully configure DMARC for your domain.
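
As a sketch of the alignment logic described above (not how any ISP actually implements it), a simplified DMARC-style check might look like the following Python. The org_domain helper is a naive stand-in: real implementations consult the Public Suffix List to find organizational domains.

```python
def org_domain(domain):
    # Naive organizational domain: last two labels. Real DMARC checks
    # consult the Public Suffix List (e.g. for co.uk-style suffixes).
    return ".".join(domain.lower().rsplit(".", 2)[-2:])

def dmarc_aligned(from_domain, spf_pass_domain=None, dkim_pass_domain=None):
    """DMARC passes if at least one authenticated domain aligns with From."""
    frm = org_domain(from_domain)
    if spf_pass_domain and org_domain(spf_pass_domain) == frm:
        return True
    if dkim_pass_domain and org_domain(dkim_pass_domain) == frm:
        return True
    return False

# SPF authenticates the SES MAIL-FROM domain, which does not align,
# but a DKIM signature on example.com does:
print(dmarc_aligned("example.com",
                    spf_pass_domain="amazonses.com",
                    dkim_pass_domain="example.com"))  # True
print(dmarc_aligned("example.com",
                    spf_pass_domain="amazonses.com"))  # False
```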

 If you have any questions, comments or other concerns related to SPF and your Amazon SES sending, don’t hesitate to jump on the Amazon SES Forum and let us know. Thank you for choosing SES and happy sending!

Bounces To Domains You Have Verified

by Samuel Minter
Hello SES senders!  We have talked a number of times about how high bounce rates indicate a need to improve sending practices.  A high bounce rate can be a sign that someone is sending mail to lists that they have bought or rented, or that they aren’t maintaining their own lists, or a number of other problems.  As such, SES takes high bounce rates seriously, and requires all SES senders to keep their bounce rates low.
Over time though, we have noticed one type of bounce that, while still an indication that something is wrong, is usually not an indication that the user is doing anything that would cause a problem with ISPs. The type of bounce we’re referring to happens when a user sends some sort of notification to an invalid address at their own domain.
Sending notifications to an address at your own domain is certainly not a bad practice in itself. For example, you might use system notifications to alert you that there has been an error or something else that needs attention.  This works fine until one of the people receiving notifications leaves the company and their mail starts bouncing.  If the sending to that address is significant compared to your overall sending, your bounce rate may spike, despite your good intentions.
Another situation is people testing their system by sending mail to made-up addresses at their own domain.  This practice also causes bounce rates to skyrocket.
None of these bounces are something you want, of course.  In the first case, you should have set up bounce processing that would notice the new bounces and allow you to take action to change where the alerts go, or suppress them altogether.  Otherwise, the notices aren’t serving their intended purpose anyway and are just wasting your resources. In the second case, you should be using the SES mailbox simulator rather than sending to made-up addresses, because it is never good to intentionally send to addresses that don’t exist.
While it would be better if people didn’t bounce messages sent to their own domains, we recognize that this is a situation where SES can allow some flexibility.
Because of this, as of today, while bounces to domains you have  verified with us will still be sent to you as bounces and show up in your bounce metrics on the console and with GetSendStatistics, they will no longer "count" when we look at your bounce rate to determine if you have a bounce problem.  You should still fix the underlying issue, but we will not notify you of a problem if your bounce rate is only high because of messages you are sending to domains you have verified with us.
Note that we can’t let you off the hook for bounces to email addresses you have individually verified, though, because we need evidence that you in fact control the domain, and that the domain owner won’t be upset with the volume of bounces hitting it.  If you do own the domain but have only used email address verification so far, consider verifying the entire domain.
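
To see why excluding verified domains matters, here is a rough, illustrative sketch of the adjusted calculation. The real computation happens inside SES, over a sliding window; the function and data below are invented for illustration only.

```python
def adjusted_bounce_rate(sends, bounces, verified_domains):
    """Bounce rate ignoring bounces to recipient domains you have verified.

    sends: total messages sent; bounces: bounced recipient addresses.
    Illustrative only -- the real calculation happens inside SES.
    """
    counted = [addr for addr in bounces
               if addr.rsplit("@", 1)[-1].lower() not in verified_domains]
    return len(counted) / sends if sends else 0.0

# Hypothetical sender: 1,000 sends, two internal alert addresses bouncing
bounces = ["alerts@example.com", "gone@example.com", "typo@gmail.com"]
rate = adjusted_bounce_rate(1000, bounces, {"example.com"})
print(rate)  # 0.001 -- only the gmail.com bounce still counts
```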
As always, we encourage your feedback in the  SES forum. Thank you for using SES!

Announcing Delivery Notifications!

The Amazon SES team is excited to announce the release of delivery notifications. This feature allows you to receive an SNS notification each time SES successfully delivers one of your emails to a recipient’s mail server. The notifications contain delivery information that provides increased transparency into the status of email sent with SES.

What are delivery notifications?

Delivery notifications are JSON objects that contain metadata about an email and the mail server that accepted it. Here’s an example:

{
  "notificationType": "Delivery",
  "mail": {
    "timestamp": "2014-05-28T22:40:59.638Z",
    "messageId": "0000014644fe5ef6-9a483358-9170-4cb4-a269-f5dcdf415321-000000",
    "source": "",
    "destination": [""]
  },
  "delivery": {
    "timestamp": "2014-05-28T22:41:01.184Z",
    "recipients": [""],
    "processingTimeMillis": 826,
    "reportingMTA": "",
    "smtpResponse": "250 ok:  Message 64111812 accepted"
  }
}

Within the notification, the top-level structure is in the same format as bounce and complaint notifications. However, unique to delivery notifications is the delivery object that contains information specific to the delivery, as described next.
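
A subscriber to the SNS topic can pull out the interesting fields with a few lines of Python. The sample below rebuilds a notification shaped like the example above (with the elided address fields left out):

```python
import json

# A notification shaped like the delivery example shown earlier
notification = json.loads("""
{
  "notificationType": "Delivery",
  "mail": {
    "timestamp": "2014-05-28T22:40:59.638Z",
    "messageId": "0000014644fe5ef6-9a483358-9170-4cb4-a269-f5dcdf415321-000000"
  },
  "delivery": {
    "timestamp": "2014-05-28T22:41:01.184Z",
    "processingTimeMillis": 826,
    "smtpResponse": "250 ok:  Message 64111812 accepted"
  }
}
""")

if notification["notificationType"] == "Delivery":
    delivery = notification["delivery"]
    print(notification["mail"]["messageId"])
    print(delivery["smtpResponse"])          # the remote server's 250 reply
    print(delivery["processingTimeMillis"])  # 826
```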

Why do I want them?

Delivery notifications allow you to more accurately track your sending by alerting you the moment that an email is handed off from SES to a receiving mail server. In addition to a delivery timestamp, the notifications provide you with the response message of the remote mail server that accepted your email, the amount of time SES took to process your mail, and more. For details on each field present in the notification along with additional examples, see the AWS developer guide.

How do I use them?

If you’re familiar with configuring bounce or complaint notifications, the process is similar for delivery notifications:

  1. Create an SNS topic you’d like to use to receive delivery notifications. Alternatively, you may use an existing SNS topic.
  2. Use the SES console to edit the notification settings for an identity and select your SNS topic of choice from the “Deliveries” drop-down menu.
    You may also use the SetIdentityNotificationTopic API if you’d prefer to adjust the settings programmatically.
  3. Subscribe your application to the previously specified topic using one of the methods supported by SNS.

Important details

Delivery notifications are published as soon as SES delivers an email to a recipient’s mail server. In most cases, email sent through SES is delivered within seconds, but occasionally it may take considerably longer. For more information about this, please see the previous SES blog entry on the subject: Three places where your email could get delayed when sending through SES.

As with bounce and complaint notifications, an email with multiple recipients may result in more than one delivery notification. See the AWS developer guide for additional information about notifications with multiple recipients.

Furthermore, receiving a delivery notification is not an indication that the email’s intended recipient received the mail in their inbox, or even that they received it in their junk or spam folder. After accepting an email from SES, email providers have complete control over how they process and display the email to their customers. To help ensure your emails reach your recipients’ inboxes, we recommend that you follow the best practices outlined in the Amazon SES whitepaper.

We hope that you enjoy this feature! As always, please leave any comments or questions you may have in the SES Forum or here in the comments section of the blog.

Debugging SMTP Conversations Part 3: Analyzing TCP Packets

by Elton Pinto

We’ve finally reached the conclusion of our deep dive into how you can capture SMTP conversations should you need to debug an issue that lies deeper than your application. Now that we’ve gone over SMTP conversation basics and getting the easiest to decipher bits of a TCP conversation with TCP Flow, let’s look at all the information contained in a TCP conversation using TCP Dump and Wireshark.

Using TCP Dump

TCP Dump is an open source network packet analyzer (licensed under a 3-clause BSD license) which, in conjunction with the libpcap library, can also be used for capturing network traffic. It is one of the most widely used packet analyzers around because it provides a raw level of detail that solutions like TCP Flow don’t provide. It’s essentially a fire hose of data, so it’s sometimes used to capture data that is then read in using Wireshark, which is licensed under GNU GPL v2 and provides you with a great GUI for filtering and analyzing packets.

Amazon EC2 instances running an Amazon Linux AMI come with TCP Dump (tcpdump) pre-installed, so you don’t need to do anything there. The TCP Dump manual is even more intimidating than the TCP Flow manual, so here’s a simple base command you can start and experiment with:

sudo tcpdump -i any -w ~/captures/capture_%Y-%m-%d-%H-%M-%S.cap -G 30 -n -X -Z $USER "port 25"

  • The -i option specifies what network interface to listen on, just as in TCP Flow. For most folks, “any” is going to work just fine.
  • The -w option writes the raw packets to the file instead of printing to the console, and it’s followed by the file path and format. You can specify the time in plain old strftime format – in this example a file would look like ~/captures/capture_2014-04-30-19-15-00.cap for the time 2014-04-30T19:15:00Z if your machine’s time zone is UTC.
  • The -G option is very useful if your application processes large amounts of data – it lets you specify, in seconds, how often the dump file is rotated. In this case, it’ll create a new capture file every 30 seconds (and the file naming will follow what you specified in the -w option).
  • The -n option will forego printing FQDNs of host names referenced in the dump. If your application logs print IP addresses instead of host names, this option will make your life much easier.
  • The -X option will print each packet in hex and ASCII, which may come in handy in Wireshark.
  • The -Z option drops privileges to the user name specified, which means that you’ll own the captures instead of root owning them.
  • The end of the command line is, once again, a filtering expression as defined in the pcap filter manual.
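
The file that -w produces is a standard libpcap capture, which is why Wireshark can open it directly. As an aside, its 24-byte global header is simple enough to decode with the Python standard library; this sketch handles only the classic microsecond-resolution format (pcapng and the nanosecond variant use different magic numbers):

```python
import struct

def read_pcap_header(data):
    """Decode the 24-byte libpcap global header from a capture file."""
    magic = data[:4]
    if magic == b"\xd4\xc3\xb2\xa1":      # little-endian, microsecond timestamps
        endian = "<"
    elif magic == b"\xa1\xb2\xc3\xd4":    # big-endian
        endian = ">"
    else:
        raise ValueError("not a classic pcap capture")
    major, minor, _tz, _sigfigs, snaplen, linktype = struct.unpack(
        endian + "HHiIII", data[4:24])
    return {"version": (major, minor), "snaplen": snaplen, "linktype": linktype}

# A synthetic little-endian header: version 2.4, snaplen 65535, LINKTYPE_ETHERNET (1)
header = b"\xd4\xc3\xb2\xa1" + struct.pack("<HHiIII", 2, 4, 0, 0, 65535, 1)
print(read_pcap_header(header))
# {'version': (2, 4), 'snaplen': 65535, 'linktype': 1}
```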

A TCP Dump and Wireshark Example: START TLS

Unlike TCP Flow output, your TCP Dump capture file(s) will probably be very hard to read. This is where Wireshark comes in handy. Wireshark actually comes with the command-line tool tshark, which you could use instead of TCP Dump (it’s built on top of TCP Dump), but it doesn’t provide a lot of added value for the general use case. If your own computer is Linux, you should be able to just install Wireshark with yum:

sudo yum -y install wireshark wireshark-gnome

There are Windows and OS X installers available from the Wireshark website, which also has detailed documentation on the suite of features that you can take advantage of. There’s a lot of documentation there, so before you browse it, it’s not a bad idea to play with the program a bit to get your feet wet.

Once you have Wireshark installed, transfer your TCP Dump capture from your EC2 instance to your own computer, fire up Wireshark, and open your TCP Dump capture. On Linux, you can simply pass the capture file to Wireshark as a command-line argument (you may or may not need sudo privileges to run it):

sudo wireshark ~/capture_2014-04-16-23-52-29.cap

I recommend immediately going to “View” -> “Time Display Format” and then changing the date format from epoch time (the default) to something more readable. You’ll see a table in the center pane of the GUI that displays one row per packet of data, followed by deeper details of a selected packet, followed by a pane with a hex and ASCII view of the packet. Above all this, you’ll see a blank field that you can fill in with a filter expression, of which Wireshark has an impressive array. You can explore them with the “+ Expression” button (you should see some familiar filters under the “TCP” section from the TCP Flow filter expressions) and then choose one to slice your traffic to just what you’re interested in. There are also some default filter expressions to choose from in “Analyze” -> “Display Filters”. Let’s once again take a look at a STARTTLS conversation with Amazon SES:

TCP Dump / Wireshark

There are a few items of note here:

  • Everything is color coded as ingress (light blue) or egress (black). You also see grey for the ACK packet. Speaking of which…
  • Notice that you can see where the connection was established with the SMTP server – the first three packets. The first packet is the SYN packet from the SMTP client to the SMTP server to open a TCP connection. The second packet is the SYN ACK from the server to the client that it received the SYN packet. The third packet is the ACK from the client to the server that it received the SYN ACK and the connection is established. This is especially useful if you’re trying to determine if there are high latencies during connection establishment or between when the connection is established and receiving the SMTP greeting (the fourth packet) or latencies between all that and when your client starts sending an EHLO, etc.
  • In the pane below this packet table, you can select slices of the packet to highlight in the last pane where the packet is displayed.
  • Just as in the TCP Flow output, everything under “Ready to start TLS” is unreadable. Again, if all you care about is the timing of packets or if you’re a relay that receives email in plaintext and then transmits the messages to Amazon SES, what we’ve discussed up to now will work just fine for your use case. You can still add logging in your application to print out the decrypted packets.
  • The above screenshot shows just one conversation, but depending on your TCP Dump filtering expression, you can capture everything that’s going on to try and detect congestion or interference from other network traffic that your machine is dealing with.
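
If you export the packet timestamps from Wireshark, measuring the gaps called out above is simple arithmetic. A sketch with made-up timestamps for the first four packets of the conversation:

```python
from datetime import datetime

# Hypothetical packet timestamps: SYN, SYN-ACK, ACK, then the SMTP greeting
stamps = {
    "syn":      "2014-04-16T23:52:29.100",
    "syn_ack":  "2014-04-16T23:52:29.190",
    "ack":      "2014-04-16T23:52:29.191",
    "greeting": "2014-04-16T23:52:29.450",
}
t = {k: datetime.fromisoformat(v) for k, v in stamps.items()}

handshake_ms = (t["ack"] - t["syn"]).total_seconds() * 1000
greeting_ms = (t["greeting"] - t["ack"]).total_seconds() * 1000
print(f"TCP handshake: {handshake_ms:.0f} ms")          # TCP handshake: 91 ms
print(f"Wait for SMTP greeting: {greeting_ms:.0f} ms")
```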

A TCP Dump and Wireshark Example: TLS Wrapper

TLS wrapper mode encrypts everything right from the get-go, so your ability to peek into what’s happening is very limited. There is a way to get Wireshark to tell you which packets are used in the TLS negotiation, though. At a high level, the TLS setup process involves these steps:

  1. The client and server negotiate security capabilities to determine what to use.
  2. The server transmits digital certificates and public/private key information to the client so that the client can verify the identity of the server.
  3. The client exchanges public/private key information with the server (including a pre-master secret) and may send a digital certificate to show that the client is who it says it is (if the server asked for this).
  4. The server authenticates the client and then the client and server both turn the pre-master secret into master secrets that are used to generate the session key (the same key is generated independently on both sides). This session key is used to encrypt/decrypt communications from here on out.

If you’re deeply curious, you can read more in RFC 2246 (TLS 1.0), RFC 4346 (TLS 1.1), and RFC 5246 (TLS 1.2). Since both the client and server use a public/private key pair as part of this set-up process, you’ll need at least the client’s private key in order for Wireshark to understand the handshake. If you use Open SSL, you can generate and supply this pretty easily:

openssl genrsa -out ~/rsa_key.pem

openssl s_client -crlf -connect -key ~/rsa_key.pem

The first command line creates the key and the second command line is the one shown in the Amazon SES Developer Guide but with a private key supplied. With this connection set up, you can have your SMTP conversations and use the same TCP Dump command as before, only with port 465:

sudo tcpdump -i any -w ~/captures/capture_%Y-%m-%d-%H-%M-%S.cap -G 30 -n -X -Z $USER "port 465"

Then, just transfer the capture file and rsa_key file to your own computer and fire up Wireshark:

TLS Dump example

Note that all you see are SYNs and ACKs. We can fix this. Go to “Edit” -> “Preferences”, expand “Protocols” and then select “SSL”. For the above example, if the rsa_key file is in my home directory I’d put the following in “RSA keys list”:,465,smtp,/home/elton/rsa_key.pem

The first element is the server IP address (visible in the Wireshark GUI), the second element is the server port, the third element is the application protocol, and the last element is the location of the private key file. You’ll see something like this:

TLS Dump

Now you’ll notice in the “Info” column that you can see more information at a high level and the protocol is specified in the previous column as TCP, SSL, or TLSv1 (whereas before it was just TCP). Additionally, the next pane with the packet breakdown has bits like “Secure Socket Layer” or whatever the protocol is that highlights the part of the packet involved. The “Application Data” rows are just encrypted SMTP messages. The real value though is being able to debug any TLS wrapper issues you may have by comparing good negotiations with bad ones or timestamps in good negotiations versus bad ones and getting to the bottom of whatever is going wrong.

We hope that these posts have given you a better understanding of what’s happening behind the scenes when you interact with Amazon SES, and empowered you to better debug problems that you may experience at the transport or network layer. Thanks for being an Amazon SES customer! Happy debugging!

Debugging SMTP Conversations Part 2: Capturing a Live Conversation

by Elton Pinto

If your email-sending application has problems communicating with the Amazon SES SMTP interface (or your customers are having problems connecting to your SMTP server that proxies requests to Amazon SES), you’ll first probably check your application logs to see what’s going on. If you’re not able to find a smoking gun in your application logs though, what else can you do? Last week, we went over the basics of SMTP conversations and today we’ll explore how you can debug deeper than the application logs.

You may consider setting up an application layer wire log that shows all of the messages you’re sending and receiving, but one unlucky day you may find yourself with a lower-level issue on your hands. It could be a problem in the link between you and your ISP, between your ISP and the next hop, between your application and your kernel, or any number of other things.

A great way to get more data to help you figure out what’s going on is to go lower in the networking stack to the transport layer. Two well-known, freely available tools that can help you with this are TCP Flow and TCP Dump. TCP Flow is a great next step when you just want to see plaintext data packets in a human-readable format, while TCP Dump is more adept at giving you the kitchen sink so to speak (i.e., all the TCP packets in a variety of formats). In today’s post we’ll talk about TCP Flow. Since many of our customers use EC2 Linux-backed instances, we’ll focus on how to use TCP Flow from Linux.

Installing TCP Flow

TCP Flow lets you get your feet wet in transport layer debugging without overwhelming you with data. You can get the latest version using git clone:

sudo yum -y install git

mkdir ~/tcpflow && cd ~/tcpflow

git clone --recursive git://

Currently, the latest version is 1.3, and the steps in the README work on a standard EC2 with a 64-bit AMI (tested on ami-bba18dd2 and ami-2f726546), though you may also need to yum install openssl-devel. If you encounter any problems doing this you can also try downloading the latest version from the GitHub site, though the install instructions may be in the NEWS file instead of README.

If you can run sudo /usr/local/bin/tcpflow -h and see the usage information, then the install was a success and you’re ready to boogie. Otherwise, double check the console output to see if some step failed. You can get more detailed usage information from man tcpflow.

Using TCP Flow

As you can see in the TCP Flow usage information, there are a lot of options to help you toggle what you’re looking for; these can be overwhelming at first glance. Let’s look at a reasonable set of options to start you off on the right track:

sudo /usr/local/bin/tcpflow -i any -g -FT -c port 25 > ~/tcpflow_out

  • The -i option specifies what network interface to listen on (‘any’ is a reasonable default to start you off)
  • The -g option was renamed in a recent version (it used to be -J), but it’s just to give you information in different colors, which you’ll soon see is nice to have.
  • The -c option prints to the console instead of creating individual files. By default, TCP Flow creates two files for each TCP conversation – one file for the packets coming in and one for the packets being transmitted. The -c option can be a useful alternative because the console interleaves the input and output packets.
  • The -F option is all about the format of the output files, and the ‘T’ prepends each file name with an ISO-8601 timestamp. If you output to the console using the -c option, it will still prepend all the lines of your conversation with the timestamp to the millisecond even though you’re not creating any files.
  • The “port 25” bit is a filtering expression, as defined in the pcap filter manual. Depending on what your instance is up to, listening to all traffic can be overwhelming so it’s a good idea to filter on what you care about. You can filter on dozens of things including the source or destination host/port, port ranges, and protocol.

Once you have your TCP Flow output, you can look at it with the color coding preserved (there’s one color for packets sent and one for packets received) using less:

less -R ~/tcpflow_out

You can pipe grep, too, if you’re trying to isolate an incident via a specific source/destination port or address:

grep <ip-or-port> ~/tcpflow_out | less -R

A TCP Flow Example

If you establish a STARTTLS connection with the Amazon SES SMTP endpoint on port 25 and you use the above TCP Flow command, the output from less might look something like this:

TCP Flow screenshot

You’ll notice that the output is actually readable – there’s a timestamp for each packet in ISO 8601 format followed by the source IP and port of the packet and then the destination IP and port of the packet. You don’t get TCP packet headers or SYN/ACK packets or any of those details, but maybe your problem doesn’t require that much information.
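
Lines that regular are easy to post-process. The sketch below assumes lines shaped like the output just described — an ISO 8601 timestamp, then srcip.srcport-dstip.dstport:, then the payload. The exact layout (zero padding, for instance) varies between TCP Flow versions, so treat the pattern as an assumption:

```python
import re

# Assumed line shape: "<ISO-8601 timestamp> <src-ip>.<port>-<dst-ip>.<port>: <data>"
LINE = re.compile(
    r"^(?P<ts>\S+)\s+"
    r"(?P<src>\d+\.\d+\.\d+\.\d+)\.(?P<sport>\d+)-"
    r"(?P<dst>\d+\.\d+\.\d+\.\d+)\.(?P<dport>\d+):\s"
    r"(?P<data>.*)$")

def parse_line(line):
    """Split one console line into timestamp, endpoints, and payload."""
    m = LINE.match(line)
    return m.groupdict() if m else None

sample = "2014-04-16T23:52:29.123Z 10.0.0.5.43210-198.51.100.9.25: 220 ESMTP ready"
parsed = parse_line(sample)
print(parsed["src"], parsed["dport"], parsed["data"])
# 10.0.0.5 25 220 ESMTP ready
```

From here you could, for example, group lines by endpoint pair and subtract timestamps to spot slow conversations.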

From this point on, however, the conversation will look like gibberish since it’s just a TLS handshake and then all the packets are encrypted. If you use TLS wrapper mode, all the packets will look like gibberish. The nature of TLS makes it tough to decrypt these packets, but TCP Dump and Wireshark will allow us to decrypt at least some of the handshake (we’ll go over these in the next blog post of this series). TCP Flow is still useful on its own, though, if you’re receiving plaintext SMTP conversations from your customers and then proxying messages to Amazon SES for final delivery.

One last thing to note on TCP Flow – you can use the -r option to read in a TCP Dump capture and make it look readable for you.

We hope that you’ve found these tips handy, but the best is yet to come – in the next post of this series we’ll show you how to milk your TCP connections for all the data they’ve got. Thanks again for being a customer!