I’m still catching up on a couple of launches that we made late last year!
Today’s post covers two services that I’ve written about in the past — AWS Web Application Firewall (WAF) and AWS Application Load Balancer:
AWS Web Application Firewall (WAF) – Helps to protect your web applications from common application-layer exploits that can affect availability or consume excessive resources. As you can see in my post (New – AWS WAF), WAF allows you to use access control lists (ACLs), rules, and conditions that define acceptable or unacceptable requests or IP addresses. You can selectively allow or deny access to specific parts of your web application and you can also guard against various SQL injection attacks. We launched WAF with support for Amazon CloudFront.
AWS Application Load Balancer (ALB) – This load balancing option for the Elastic Load Balancing service runs at the application layer. It allows you to define routing rules that are based on content that can span multiple containers or EC2 instances. Application Load Balancers support HTTP/2 and WebSocket, and give you additional visibility into the health of the target containers and instances (to learn more, read New – AWS Application Load Balancer).
Late last year (I told you I am still catching up), we announced that WAF can now help to protect applications that are running behind an Application Load Balancer. You can set this up pretty quickly and you can protect both internal and external applications and web services.
I already have three EC2 instances behind an ALB:
I simply create a Web ACL in the same region and associate it with the ALB. I begin by naming the Web ACL. I also instruct WAF to publish to a designated CloudWatch metric:
Then I add any desired conditions to my Web ACL:
For example, I can easily set up several SQL injection filters for the query string:
After I create the filter I use it to create a rule:
And then I use the rule to block requests that match the condition:
To pull it all together I review my settings and then create the Web ACL:
Seconds after I click on Confirm and create, the new rule is active and WAF is protecting the application behind my ALB:
And that’s all it takes to use WAF to protect the EC2 instances and containers that are running behind an Application Load Balancer!
To learn more about how to use WAF and ALB together, plan to attend the Secure Your Web Applications Using AWS WAF and Application Load Balancer webinar at 10 AM PT on January 26th.
You may also find the Secure Your Web Application With AWS WAF and Amazon CloudFront presentation from re:Invent to be of interest.
As a follow-up to our recent announcement of IPv6 support for Amazon S3, I am happy to be able to tell you that IPv6 support is now available for Amazon CloudFront, Amazon S3 Transfer Acceleration, and AWS WAF and that all 60+ CloudFront edge locations now support IPv6. We are enabling IPv6 in a phased rollout that starts today and will extend across all of the networks over the next few weeks.
CloudFront IPv6 Support
You can now enable IPv6 support for individual Amazon CloudFront distributions. Viewers and networks that connect to a CloudFront edge location over IPv6 will automatically be served content over IPv6. Those that connect over IPv4 will continue to work as before. Connections to your origin servers will be made using IPv4.
Newly created distributions are automatically enabled for IPv6; you can modify an existing distribution by checking Enable IPv6 in the console or setting it via the CloudFront API:
Here are a couple of important things to know about this new feature:
- Alias Records – After you enable IPv6 support for a distribution, the DNS entry for the distribution will be updated to include an AAAA record. If you are using Amazon Route 53 and an alias record to map all or part of your domain to the distribution, you will need to add an AAAA alias to the domain.
- Log Files – If you have enabled CloudFront Access Logs, IPv6 addresses will start to show up in the c-ip field; make sure that your log processing system knows what to do with them.
- Trusted Signers – If you make use of Trusted Signers in conjunction with an IP address whitelist, we strongly recommend the use of an IPv4-only distribution for Trusted Signer URLs that have an IP whitelist and a separate IPv4/IPv6 distribution for the actual content. This model sidesteps an issue that would arise if the signing request arrived over an IPv4 address and was signed as such, only to have the request for the content arrive via a different IPv6 address that is not on the whitelist.
- CloudFormation – CloudFormation support is in the works. With today’s launch, distributions that are created from a CloudFormation template will not be enabled for IPv6. If you update an existing stack, the setting will remain as-is for any distributions referenced in the stack.
- AWS WAF – If you use AWS WAF in conjunction with CloudFront, be sure to update your WebACLs and your IP rulesets as appropriate in order to whitelist or blacklist IPv6 addresses.
- Forwarded Headers – When you enable IPv6 for a distribution, the X-Forwarded-For header that is presented to the origin will contain an IPv6 address. You need to make sure that the origin is able to process headers of this form.
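Once IPv6 is enabled, log processors and origin code need to handle both address families. Here is a minimal Python sketch (standard library only; the input formats follow the c-ip field and X-Forwarded-For header conventions described above) that classifies an incoming address:

```python
import ipaddress

def address_family(raw):
    """Classify an address from CloudFront's c-ip log field or the
    X-Forwarded-For header as IPv4 or IPv6."""
    # X-Forwarded-For may carry a comma-separated chain; the first
    # entry is the original viewer address.
    addr = raw.split(",")[0].strip()
    ip = ipaddress.ip_address(addr)
    return "IPv6" if ip.version == 6 else "IPv4"

print(address_family("203.0.113.10"))               # IPv4 viewer
print(address_family("2001:db8::1, 203.0.113.99"))  # IPv6 viewer behind a proxy
```

A check like this at the top of your log pipeline or origin handler is usually enough to avoid surprises when AAAA traffic starts to arrive.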
To learn more, read IPv6 Support for Amazon CloudFront.
All existing WAF features will work with IPv6 and there will be no visible change in performance. IPv6 addresses will appear in the Sampled Requests collected and displayed by WAF:
S3 Transfer Acceleration IPv6 Support
This important new S3 feature (read AWS Storage Update – Amazon S3 Transfer Acceleration + Larger Snowballs in More Regions for more info) now has IPv6 support. To use it, simply switch your uploads to the new dual-stack endpoint, changing yourbucket.s3-accelerate.amazonaws.com to yourbucket.s3-accelerate.dualstack.amazonaws.com.
Here’s some code that uses the AWS SDK for Java to create a client object and enable dual-stack transfer:
AmazonS3Client s3 = new AmazonS3Client();
s3.setS3ClientOptions(S3ClientOptions.builder()
    .enableDualstack()
    .setAccelerateModeEnabled(true)
    .build());
Most applications and network stacks will prefer IPv6 automatically, and no further configuration should be required. You should plan to take a look at the IAM policies for your buckets in order to make sure that they will work as expected in conjunction with IPv6 addresses.
To learn more, read about Making Requests to Amazon S3 over IPv6.
Don’t Forget to Test
As a reminder, if IPv6 connectivity to any AWS region is limited or non-existent, IPv4 will be used instead. Also, as I noted in my earlier post, the client system can be configured to support IPv6 but connected to a network that is not configured to route IPv6 packets to the Internet. Therefore, we recommend some application-level testing of end-to-end connectivity before you switch to IPv6.
When I interview a candidate for a technical position, I often ask them to explain what happens when they see an interesting link and decide to click on it. I encourage them to go into as much detail as they would like. The answers let me know how well they understand and can explain a complex concept. Some candidates will sum up the entire process in a sentence or two. Others have filled an entire whiteboard with detailed diagrams. At a very simple level, here are the steps that I like to see (If you know much about HTTP, you know that I have failed to mention the SSL handshake, cookies, response codes, caches, content distribution networks, and all sorts of other details):
- The domain name is converted to an IP address by way of a DNS lookup.
- A TCP connection is made to the remote server.
- A GET request is issued.
- The remote server locates (or generates) the desired content and returns it to fulfill the request.
- The TCP connection is closed.
- The client processes and displays the result.
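The steps above can be sketched in a few lines of Python. This is an illustrative stand-in, not a real fetch: it spins up a local standard-library HTTP server so the example is self-contained, and it omits TLS, caching, and everything else mentioned in the parenthetical:

```python
import http.client
import http.server
import socket
import threading

# A tiny local origin so the example is self-contained.
class Origin(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html>hello</html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Origin)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# 1. "DNS lookup" - resolve the name to an address.
addr = socket.getaddrinfo(host, port)[0][4][0]
# 2-5. Open a TCP connection, issue a GET, read the response, close.
conn = http.client.HTTPConnection(addr, port)
conn.request("GET", "/")
resp = conn.getresponse()
content = resp.read()
conn.close()
server.shutdown()

print(resp.status, content)
```

Every script, style sheet, and image referenced by the page would repeat this dance, which is exactly the overhead HTTP/2 sets out to reduce.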
A complex web page might contain references to scripts, style sheets, images, and so forth. If this is the case, the entire sequence must be repeated for each reference. On mobile devices, each connection request wakes up the radio, adding hundreds or thousands of milliseconds of overhead (read about Application Network Latency Overhead to learn more).
Amazon CloudFront is a global content distribution network, or CDN. Several of CloudFront’s features help to make the process above more efficient. For example, it caches frequently used content in dozens of edge locations scattered across the planet. This allows CloudFront to respond to requests more quickly and over a shorter distance, thereby improving application performance. With no minimum usage commitments, CloudFront can help you to deliver web sites, videos, and other types of content to your end users in an economical and expeditious way.
New HTTP/2 Support
The retrieval process that I described above contains a lot of room for improvement. Repeated DNS lookups are avoidable, as are TCP connection requests. HTTP/2, a new version of the HTTP protocol, streamlines the process by reusing the TCP connection if possible. This core feature, combined with many other changes to the existing HTTP model, has the potential to reduce latency and to improve the performance of all types of web applications.
Today we are launching HTTP/2 support for CloudFront. You can enable it on a per-distribution basis today and your HTTP/2-aware clients and applications will start to make use of it right away. While HTTP/2 does not mandate the use of encryption, it turns out that all of the common web browsers require the use of HTTPS connections in conjunction with HTTP/2. Therefore, you may need to make some changes to your site or application in order to take full advantage of HTTP/2. Due to the (fairly clever) way in which HTTP/2 works, older clients remain compatible with web endpoints that do support it.
The connection from CloudFront back to your origin server is still made using HTTP/1. You don’t need to make any server-side changes in order to make your static or dynamic content accessible via HTTP/2.
Several AWS customers have already been testing CloudFront’s HTTP/2 support and have seen clear performance improvements. Marfeel is an ad tech platform that helps publishers to create, optimize, and monetize mobile web sites. They told us that CloudFront’s HTTP/2 support has reduced their first-render time by 17%. This allows the sites that they create to consistently load within 0.8 seconds, making them more accessible to mobile readers.
To enable HTTP/2 for an existing CloudFront distribution, simply open up the CloudFront Console, locate the distribution, and click on Edit. Then change the Supported HTTP Versions to include HTTP/2:
The change will be effective within minutes and your users should start to see the benefits shortly thereafter. As I noted earlier, HTTP/2 must be used in conjunction with HTTPS. You can use your browser’s developer tools to verify that HTTP/2 is being used. Here’s what I see when I use the Network tool in Firefox:
You can also add HTTP/2 support to Curl and test from the command line:
$ curl --http2 -I https://d25c7x5dprwhn6.cloudfront.net/images/amazon_fulfilment_center_phoenix.jpg
HTTP/2.0 200
content-type:image/jpeg
content-length:650136
date:Sun, 04 Sep 2016 23:32:39 GMT
last-modified:Sat, 03 Sep 2016 15:21:01 GMT
etag:"b95a82b8df7373895a44a01a3c1e6f8d"
x-amz-version-id:fgWz_QaWo_.4GF7_VOl0gkBwnOmOALz6
accept-ranges:bytes
server:AmazonS3
age:644
x-cache:Hit from cloudfront
via:1.1 91e54ea7c5cc54f4a3500c72b19a2a23.cloudfront.net (CloudFront)
x-amz-cf-id:Dr_A3emW7OdxWfs3O0lDZfiBFL7loKMFCP9XC0_FYmCkeRuyXcu5BQ==
Here are some resources that you can use to learn more about HTTP/2 and how it can benefit your application:
- Journey to HTTP/2 – I heartily agree with Kamran Ahmed‘s declaration that “HTTP is the protocol that every web developer should know as it powers the whole web and knowing it is definitely going to help you develop better applications.”
- Curl With HTTP/2 Support – Step-by-step directions that show you how to upgrade cURL so that it can make HTTP/2 requests.
- HTTP/2 – This chapter is part of the High Performance Browser Networking book.
- HTTP/2 Home Page – The primary information source for all things HTTP/2; pay special attention to the HTTP/2 FAQ.
- 7 Tips for Faster HTTP/2 Performance – Some good ideas, but ignore the references to the now-deprecated SPDY.
- HTTP/2 Explained – A detailed reference to HTTP/2 as a technology and as a protocol.
This feature is available now and you can start using it today at no extra charge.
With a long feature list (powered in large part by customer requests) Amazon CloudFront is well-suited to delivering your static, dynamic, and interactive content to users all over the world at high speed and with low latency. As part of the AWS Free Tier, you can handle up to 2 million HTTP and HTTPS requests and transfer up to 50 GB of data each month at no charge.
I am happy to announce that we are adding CloudFront edge locations in Toronto and Montreal in order to better serve our users in the region, bringing the global count up to 59 (full list). This includes a second edge location in São Paulo, Brazil that we recently brought online. Pricing for the locations in Toronto and Montreal is the same as for our US edge locations (see CloudFront Pricing for more info). The edge locations in Canada fall within Price Class 100.
If your application already uses CloudFront you need not do anything special in order to take advantage of the new locations. Your users will enjoy fast, low-latency access to your static, dynamic, or streamed content regardless of their location. As a developer, you will find CloudFront to be simple to use as well as cost-effective. Because it is elastic, you don’t need to over-provision in order to handle unpredictable traffic loads.
Before you ask, these new locations will also support Amazon Route 53 in the future. Again, you won’t need to do anything special in order to take advantage of the new locations!
PS – You can learn more about CloudFront at our monthly Office Hours (register now). The next session will be held at 10 AM PT on August 30th, 2016.
Amazon CloudFront can be used to deliver static and dynamic content using a global network of edge locations. You can set it up in minutes and give your customers the benefit of fast, low-latency access to your web site, movies, music, and so forth.
Each CloudFront distribution references one or more origins (web servers or S3 buckets). When CloudFront needs content that is not cached at an edge location, it makes a request to the appropriate origin, as determined by a set of mappings (behaviors) that are also specified within the distribution.
Today we are launching three new features that will give you additional control over the connection between CloudFront and your origins:
- Support for TLS v1.1 and v1.2
- HTTPS-only connection
- Control of edge-to-origin request headers
Support for TLS v1.1 and v1.2
We have added TLS v1.1 and TLS v1.2 to the list of protocols that you can configure between the edge and a custom origin. With this change, you can now configure CloudFront to use SSLv3, TLS v1.0, v1.1, and v1.2 for each custom origin you set up for a CloudFront distribution.
HTTPS-Only Connections
You can now configure CloudFront to always use HTTPS while connecting to your origin, regardless of the protocol (HTTP or HTTPS) that was used to connect to the edge. Previously, CloudFront connected to the origin using the same protocol (HTTP or HTTPS) that was used to connect to the edge. When you enable this new feature, both HTTP and HTTPS requests from the viewer will be sent to the origin using HTTPS.
Here is how you configure the desired protocols and HTTPS Only for a custom origin:
Control of Edge-to-Origin Request Headers
You can now configure CloudFront to add custom headers or override the value of existing request headers when CloudFront forwards requests to your origin. You can use these headers to help validate that requests made to your origin were sent from CloudFront (shared secret) and configure your origin to only allow requests that contain the custom header values that you specify.
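On the origin side, the shared-secret check can be as simple as a constant-time comparison of the incoming header. Here is a minimal Python sketch; the header name X-Shared-Secret and the secret value are placeholders of my own choosing, not anything CloudFront defines for you:

```python
import hmac

# The same value you configured CloudFront to add on every origin fetch.
EXPECTED_SECRET = "s3kr1t-value-configured-in-cloudfront"

def request_is_from_cloudfront(headers):
    """Allow the request only if it carries the custom header that
    CloudFront was configured to add when forwarding to the origin."""
    supplied = headers.get("X-Shared-Secret", "")
    # compare_digest avoids leaking the secret through timing differences.
    return hmac.compare_digest(supplied, EXPECTED_SECRET)

print(request_is_from_cloudfront({"X-Shared-Secret": EXPECTED_SECRET}))  # True
print(request_is_from_cloudfront({}))                                    # False
```

A check like this, combined with an origin firewall that only admits CloudFront's address ranges, makes it much harder to bypass the CDN and hit the origin directly.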
For Cross-Origin Resource Sharing (CORS), you can configure CloudFront to always supply the applicable headers to your origin to accommodate viewers that don’t automatically include those headers in requests. This also allows you to disable varying on the Origin header, which increases your cache hit ratio.
Here’s how you would add new headers named X-CloudFront-Distribution-Id and X-Shared-Secret:
These features are available now and you can start using them today at no additional cost. To learn more, read the CloudFront Developer Guide.
Amazon CloudFront helps you to get your content to your users at high speed with low latency.
Today we are making CloudFront even better with the addition of support for Gzip compression. After you enable it for a particular CloudFront distribution, text and binary content will be compressed at the edge and returned in response to requests that indicate that compressed content is preferred (most modern browsers do this automatically).
Your pages will load more quickly, content will download faster, and your CloudFront data transfer charges may be reduced as well. For a typical web page composed of a mix of text, scripts, and images, the overall payload reduction can approach 80%.
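To get a feel for the numbers, here is a small stand-alone Python sketch that compresses a chunk of repetitive HTML the way an edge server might, honoring the client's Accept-Encoding header (real CloudFront behavior also depends on content type and object size, which this ignores):

```python
import gzip

def respond(body: bytes, accept_encoding: str):
    """Return (payload, encoding) - compress only when the client asks for it."""
    if "gzip" in accept_encoding.lower():
        return gzip.compress(body), "gzip"
    return body, "identity"

# Repetitive markup, typical of HTML/CSS/JS, compresses very well.
page = b"<div class='post'><p>Hello, CloudFront!</p></div>\n" * 200

payload, enc = respond(page, "gzip, deflate, br")
print(f"{len(page)} bytes -> {len(payload)} bytes ({enc})")
```

Running this shows a reduction of well over 90% for such repetitive markup; real pages with mixed content land lower, in the range the post describes.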
I tested this new feature on this very blog! Here is the data transfer without compression:
And here it is with compression:
As you can see from the browser’s status bar, Gzip compression reduced total download size from 792 KB to 177 KB (a 77% reduction). Download time was reduced from 846 ms to 446 ms (almost 50%).
Enabling Gzip Compression
You can enable this feature in a minute! Simply open up the CloudFront Console, locate your distribution, and set Compress Objects Automatically to Yes in the Behavior options:
To learn more, read about Serving Compressed Files.
This feature is available now and you can start using it today! There is no extra charge for the compression; your CloudFront data transfer charges may actually go down (the specifics depend on the proportion of compressed to uncompressed requests, of course).
Have you ever taken the time to watch the access and error logs from your web server scroll past? In addition to legitimate well-formed requests from users and spiders, you will probably see all sorts of unseemly and downright scary requests far too often. For example, I checked the logs for one of my servers and found that someone or something was looking for popular packages that are often installed at well-known locations (I have changed the source IP address to 10.11.12.217 for illustrative purposes):
If any of those probes had succeeded, the attacker could then try a couple of avenues to gain access to my server. They could run through a list of common (or default) user names and passwords, or they could attempt to exploit a known system, language, or application vulnerability (perhaps powered by SQL injection or cross-site request forgery) as the next step.
Like it or not, these illegitimate requests are going to be flowing in 24×7. Even if you keep your servers well-patched and do what you can to keep the attack surface as small as possible, there’s always room to add an additional layer of protection.
New AWS WAF
In order to help you to do this, we are launching AWS WAF today. As you will see when you read this post, AWS WAF will allow you to protect your AWS-powered web applications from application-layer attacks such as those I described above.
You can set it up and start protecting your applications in minutes. You simply create one or more web Access Control Lists (web ACLs), each containing rules (sets of conditions that define acceptable or unacceptable requests or IP addresses) and actions to take when a rule is satisfied. Then you attach the web ACL to your application’s Amazon CloudFront distribution.
From that point forward, incoming HTTP and HTTPS requests that arrive via the distribution will be checked against each rule in the associated web ACL. The conditions within the rules can be positive (allow certain requests or IP addresses) or negative (block certain requests or IP addresses).
I can use the rules and the conditions in many different ways. For example, I could create a rule that would block all access from the IP address shown above. If I were getting similar requests from many different IP addresses, I could choose to block on one or more strings in the URI such as “/typo3/” or “/xampp/.” I could also choose to create rules that would allow access to the actual functioning URIs within my application, and block all others. I can also create rules that guard against various forms of SQL injection.
AWS WAF Concepts
Let’s talk about conditions, rules, web ACLs, and actions. I’ll illustrate some of my points with screen shots of the AWS WAF console.
Conditions inspect incoming requests. They can look at the request URI, the query string, a specific HTTP header, or the HTTP method (GET, PUT, and so forth):
Because attackers often attempt to camouflage their requests in devious ways, conditions can also include transformations that are performed on the request before the content is inspected:
Conditions can also look at the incoming IP address, and can match a /8, /16, or /24 range. They can also use a /32 to match a single IP address:
Rules reference one or more conditions, all of which must be satisfied in order to make the rule active. For example, one rule could reference an IP-based condition and a request-based condition in order to block access to certain content. Each rule also generates Amazon CloudWatch metrics.
Actions are part of rules, and denote the action to be taken when a request matches all of the conditions in a rule. An action can allow a request to go through, block it, or simply count the number of times that the rule matches (this is good for evaluating potential new rules before using a more decisive action).
Web ACLs in turn reference one or more rules, along with an action for each rule. Each incoming request for a distribution is evaluated against successive rules; when a request matches all of the conditions in a rule, the action associated with that rule is taken. If no rule matches, the default action (block or allow the request) is taken.
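The condition → rule → web ACL flow can be modeled in a few dozen lines. The sketch below is a simplified local model of the semantics just described, not the WAF API (real conditions also support transformations, byte matching, and more); it uses the standard library's ipaddress module for the CIDR checks:

```python
import ipaddress

def ip_condition(cidr):
    """Condition that matches when the request IP falls in a CIDR range."""
    net = ipaddress.ip_network(cidr)
    return lambda req: ipaddress.ip_address(req["ip"]) in net

def uri_condition(fragment):
    """Condition that matches when the URI contains a given string."""
    return lambda req: fragment in req["uri"]

def evaluate(web_acl, request, default_action="ALLOW"):
    """Rules are tried in order; a rule fires only when ALL of its
    conditions match, and the first matching rule's action wins."""
    for conditions, action in web_acl:
        if all(cond(request) for cond in conditions):
            return action
    return default_action

# Block a single scanner address, and any probe for /xampp/.
acl = [
    ([ip_condition("10.11.12.217/32")], "BLOCK"),
    ([uri_condition("/xampp/")], "BLOCK"),
]

print(evaluate(acl, {"ip": "10.11.12.217", "uri": "/index.html"}))   # BLOCK
print(evaluate(acl, {"ip": "192.0.2.1", "uri": "/xampp/setup.php"}))  # BLOCK
print(evaluate(acl, {"ip": "192.0.2.1", "uri": "/index.html"}))       # ALLOW
```

Note how the default action at the end of the loop mirrors the default action on a real web ACL: it decides the fate of every request that no rule claims.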
WAF in Action
Let’s go through the process of creating a condition, a rule, and a web ACL. I’ll do this through the console, but you can also use the AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the Web Application Firewall API.
The console leads me through the steps. I start by creating a web ACL called ProtectSite:
Then I create conditions that will allow or block content:
I can create an IP match condition called BadIP to block the (fake) IP address from my server log:
Then I use the condition to create a rule called BadCompany:
And now I select the rule and choose the action (a single web ACL can use multiple rules; my example uses just one):
As you can see above, the default action is to allow requests through. The net effect is that this combination (condition + rule + web ACL) will block incoming traffic from 10.11.12.217 and allow everything else to go through.
The next step is to associate my new web ACL with a CloudFront distribution (we’ll add more services over time):
A single web ACL can be associated with any number of distributions. However, each distribution can be associated with at most one web ACL.
The web ACL will take effect within minutes. I can inspect its CloudWatch metrics to understand how often each rule and each web ACL is activated.
Everything that I have shown you above can also be accessed from your own code:
- CreateSqlInjectionMatchSet is used to create conditions.
- CreateRule is used to create rules from conditions.
- CreateWebACL is used to create web ACLs from rules.
- UpdateWebACL is used to associate a web ACL with a CloudFront distribution.
There are also functions to list, update, and delete conditions, rules, and web ACLs.
The GetSampledRequests function gives you access to up to 5,000 of the requests that were evaluated against a particular rule within a time period that you specify. The response includes detailed information about each of the requests, including the action taken (ALLOW, BLOCK, or COUNT).
AWS WAF is available today anywhere CloudFront is available. Pricing is $5 per web ACL, $1 per rule, and $0.60 per million HTTP requests.
All About King
The company was founded in 2003. At that time, they built web games that were easily accessed from the portal sites that were so popular back then. Around 2009 their initial experiments with Facebook games were a success, as were subsequent Facebook-centric mobile games. With the handwriting on the wall becoming clear, they built socially connected, cross-platform games that ran equally well on the web and on mobile devices, allowing their users to pick up the nearest convenient device and resume playing while maintaining their progression in the games. The goal is to let users connect and play anywhere, at any time, on any device.
As of the first quarter of 2015, their 364 million players come from over 200 countries and rack up 1.6 billion game plays per day. Many of these plays occur in short bursts while the player is in the subway, waiting for a dentist appointment, or waiting in line. The players (many of them in the over-24 age group) don’t think of themselves as gamers, but they do enjoy being entertained by casual game play. Players come from all over the world — many from the United States and Europe, with other markets including Japan, China, and Korea (amongst others) also important. The game logic is universal, but promotional activities and in-game events are tailored to local calendars and markets.
We talked at length regarding the challenges that come with the need to deliver game content to a global user base. Some players are on older mobile networks (2G or 3G), with limited bandwidth, modest amounts of peering, and intermittent connectivity. Regardless of location or technology, players need fresh content so that they can use the game instead of waiting for it to load.
The team at King chose Amazon CloudFront as the content delivery vehicle for their games. The factors that influenced this decision included:
Global Reach – They are able to take advantage of CloudFront’s global network of edge locations. With locations in the US, Europe, Asia, Australia, and South America, CloudFront is able to deliver content efficiently to users all over the world.
Platform Features – They appreciate the fact that the CloudFront service gains new features on a regular and frequent basis (see CloudFront What’s New for a comprehensive list).
API Access – The developers and operators at King are able to use the CloudFront API to manage game content as an integrated part of their application release cycle. This self-service model allows them to get fresh, consistent content out to their users as quickly as possible.
Cost-Effectiveness – Because AWS does not charge for data transfers between an AWS-hosted origin (Amazon Simple Storage Service (S3) or Amazon Elastic Compute Cloud (EC2)) and CloudFront, origin fetches are cost-effective.
Scale – CloudFront currently delivers hundreds of terabytes of content for King every day, with spikes to half of a petabyte or more when they launch a new game or initiate a large-scale marketing program.
Performance – With a goal of providing the best possible user experience (which they called “bite-size moments of magic” on our call), they track latency, responsiveness, and load times for each game. All of the metrics improved when they switched to CloudFront.
To learn more about CloudFront, read the Getting Started Guide.
The scale of AWS makes it possible for us to take on projects that could be too large, too complex, too expensive, or too time-consuming for our customers. This is often the case for issues in security, particularly in the world of compliance. Establishing and documenting the proper controls, preparing the necessary documentation, and then seeking the original certification and periodic re-certifications take money and time, and require people with expertise and experience in some very specific areas.
From the beginning, we have worked to demonstrate that AWS is compliant with a very wide variety of national and international standards, including HIPAA, PCI DSS Level 1, ISO 9001, ISO 27001, SOC (1, 2, and 3), FedRAMP, and DoD CSM (to name a few).
In most cases we demonstrate compliance for individual services. As we expand our service repertoire, we likewise expand the work needed to attain and maintain compliance.
PCI Compliance for CloudFront
Today I am happy to announce that we have attained PCI DSS Level 1 compliance for Amazon CloudFront. As you may already know, PCI DSS is a requirement for any business that stores, processes, or transmits credit card data.
Our customers that use AWS to implement and host retail, e-commerce, travel booking, and ticket sales applications, can take advantage of this, as can those that provide apps with in-app purchasing features. If you need to distribute static or dynamic content to your customers while maintaining compliance with PCI DSS as part of such an application, you can now use CloudFront as part of your architecture.
Other Security Features
In addition to PCI DSS Level 1 compliance, a number of other CloudFront features should be of value to you as part of your security model. Here are some of the most recent features:
HTTP to HTTPS Redirect – You can use this feature to enforce an HTTPS-only discipline for access to your content. You can restrict individual CloudFront distributions to serve only HTTPS content, or you can configure them to return a 301 redirect when a request is made for HTTP content.
Signed URLs and Cookies – You can create a specially formatted “signed” URL that includes a policy statement. The policy statement contains restrictions on the signed URL, such as a time interval which specifies a date and time range when the URL is valid, and/or a list of IP addresses that are allowed to access the content.
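The policy statement is just JSON. Here is a sketch of building a custom policy that combines a time window with an IP restriction; the resource URL and values are placeholders, and an actual signed URL also requires signing this policy with your CloudFront key pair's RSA private key, which is omitted here:

```python
import json

def make_custom_policy(resource_url, expires_epoch, ip_cidr):
    """Build a CloudFront-style custom policy statement as compact JSON."""
    policy = {
        "Statement": [{
            "Resource": resource_url,
            "Condition": {
                "DateLessThan": {"AWS:EpochTime": expires_epoch},
                "IpAddress": {"AWS:SourceIp": ip_cidr},
            },
        }]
    }
    # CloudFront expects the policy serialized without extra whitespace.
    return json.dumps(policy, separators=(",", ":"))

policy = make_custom_policy(
    "https://example.cloudfront.net/private/report.pdf",
    1735689600,          # URL is valid until this Unix timestamp
    "203.0.113.0/24",    # only this network may use the URL
)
print(policy)
```

The signed policy travels in the URL (or cookie) along with the signature, and CloudFront rejects any request that arrives after the deadline or from outside the listed range.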
Advanced SSL Ciphers – CloudFront supports the latest SSL ciphers and allows you to specify the minimum acceptable protocol version.
OCSP Stapling – This feature speeds up access to CloudFront content by allowing CloudFront to validate the associated SSL certificate in a more efficient way. The effect is most pronounced when CloudFront receives many requests for HTTPS objects that are in the same domain.
Perfect Forward Secrecy – This feature creates a new private key for each SSL session. In the event that a key was discovered, it could not be used to decode past or future sessions.
Other Newly Compliant Services
Along with CloudFront, AWS CloudFormation, AWS Elastic Beanstalk, and AWS Key Management Service (KMS) have also attained PCI DSS Level 1 compliance. This brings the total number of PCI compliant AWS services to 23.
Until now, you needed to manage your own encryption and key management in order to be compliant with sections 3.5 and 3.6 of PCI DSS. Now that AWS Key Management Service (KMS) is included in our PCI reports, you can comply with those sections of the DSS using simple console or template-based configurations that take advantage of keys managed by KMS. This will save you from doing a lot of heavy lifting and will make it even easier for you to build applications that manage customer card data in AWS.
Use it Now
There is no additional charge to use CloudFront as part of a PCI compliant application. You can try CloudFront at no charge as part of the AWS Free Tier; large-scale, long-term applications (10 TB or more of data from a single AWS region) can often benefit from CloudFront’s Reserved Capacity pricing.
Amazon CloudFront makes it easy for you to distribute content to end users with low latency, high data transfer speeds, and no minimum usage commitments.
In order to make content available with low latency, CloudFront caches objects at each of its 53 (as of this writing) edge locations. Today we are making an update to CloudFront that will provide you with additional control over the caching behavior at each edge location. You already have the ability to set the minimum length of time (commonly known as the Minimum Time to Live, or Min TTL) that CloudFront should cache each object at the edge.
You can now configure the maximum value (Max TTL) as well as a default value (Default TTL) for the length of time that CloudFront should cache your objects. These settings apply on a per-behavior basis (recall that each of your CloudFront distributions can have one or more behaviors, each of which applies to the set of objects that match a path pattern). For example, you can set distinct TTL values for web pages (*.html) and PNG images (*.png). In many cases, it is easier to set these values at the behavior level than to modify the application to consistently generate the proper cache-control headers (you do, however, still have that option).
You can use these options in many interesting ways! For example, if you don’t set any cache control header on your origin, you can use the Default TTL to specify the cache duration for the edge locations. Or, you can completely override the cache-control header set by origin by setting all three of the values (Min, Max, and Default) to the same value.
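The interaction between the origin's Cache-Control max-age and the three TTL settings can be modeled like this. It's an illustrative simplification of the documented behavior, ignoring Expires headers and directives such as no-cache:

```python
def effective_ttl(max_age, min_ttl, default_ttl, max_ttl):
    """Seconds an edge location caches an object.

    max_age is the origin's Cache-Control max-age value, or None when
    the origin sent no caching header at all.
    """
    if max_age is None:
        return default_ttl                      # no header: Default TTL applies
    return max(min_ttl, min(max_age, max_ttl))  # clamp into [Min TTL, Max TTL]

print(effective_ttl(None, 0, 86400, 31536000))   # origin sent nothing: Default TTL
print(effective_ttl(60, 300, 86400, 31536000))   # Min TTL overrides a short max-age
print(effective_ttl(10**9, 0, 86400, 31536000))  # Max TTL caps an excessive max-age
```

Setting Min, Max, and Default to the same value reduces this function to a constant, which is exactly the "completely override the origin" case described above.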
I can select and then edit a behavior in order to exercise additional control over the TTL settings:
This feature is available now and you can start using it today at no additional charge. To learn more, read about Specifying How Long Objects Stay in a CloudFront Edge Cache in the CloudFront documentation.