|Mar 06, 2014||
Amazon DynamoDB automatically replicates your data three ways within a region. Now you can also back up your data across regions in a few steps via the Management Console. The Cross-Region Export/Import console feature enables you to back up the data from your DynamoDB tables to another AWS region, or within the same region, using AWS Data Pipeline, Amazon Elastic MapReduce (EMR), and Amazon S3. This feature lets you set up the export/import without manually creating the pipeline or provisioning and maintaining the EMR cluster.
To learn more about this new feature, please visit our blog.
To get started, visit:
|Mar 06, 2014||
We are excited to announce a new feature for Elastic Load Balancing: Access Logs. This feature records all requests sent to your load balancer, and stores the logs in Amazon S3 for later analysis.
With Access Logs, you can obtain request-level details in addition to the existing load balancer metrics provided via Amazon CloudWatch. The logs contain information such as client IP address, request path, latencies, and server responses. You can use this data to pinpoint when application errors occurred or response times increased, and which requests were impacted.
These logs can also be used for web analytics to determine popular pages, page trends over time, or unique visitors. To make this analysis easier, we have integrated with Amazon Elastic MapReduce as well as analytics tools from our partners, Splunk and Sumo Logic.
To learn more about Access Logs, please see the documentation.
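As an illustration of the request-level detail described above, here is a minimal sketch of parsing a single access-log entry. The space-delimited field layout follows the ELB access-log documentation; the sample entry and the dictionary key names are illustrative assumptions.

```python
# Minimal parser for one Elastic Load Balancing access-log entry.
# Field layout is an assumption based on the ELB access-log documentation.
def parse_elb_log_line(line):
    # The quoted request is the last field; everything before it is space-delimited.
    head, _, request = line.partition(' "')
    fields = head.split()
    return {
        "timestamp": fields[0],
        "elb_name": fields[1],
        "client_ip": fields[2].split(":")[0],
        "backend": fields[3],
        "request_time": float(fields[4]),   # time to read the request
        "backend_time": float(fields[5]),   # backend processing latency
        "response_time": float(fields[6]),  # time to send the response
        "elb_status": int(fields[7]),
        "backend_status": int(fields[8]),
        "received_bytes": int(fields[9]),
        "sent_bytes": int(fields[10]),
        "request": request.rstrip('"'),
    }

sample = ('2014-02-15T23:39:43.945958Z my-loadbalancer '
          '192.168.131.39:2817 10.0.0.1:80 0.000073 0.001048 0.000057 '
          '200 200 0 29 "GET http://example.com:80/ HTTP/1.1"')
entry = parse_elb_log_line(sample)
print(entry["client_ip"], entry["elb_status"], entry["backend_time"])
```

A script like this is a starting point for pinpointing slow requests before handing the logs to EMR, Splunk, or Sumo Logic for heavier analysis.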
|Mar 05, 2014||
We are excited to announce that you can now use your own SSL certificates with Amazon CloudFront at no additional charge with Server Name Indication (SNI) Custom SSL. SNI is supported by most modern browsers, and provides an efficient way to deliver content over HTTPS using your own domain and SSL certificate. There are no additional certificate management fees to use this feature; you simply pay normal Amazon CloudFront rates for data transfer and HTTPS requests.
SNI Custom SSL relies on the SNI extension of the Transport Layer Security protocol, which allows multiple domains to serve SSL traffic over the same IP address by including the hostname that viewers are trying to connect to. Amazon CloudFront delivers your content from each edge location and offers the same security as the Dedicated IP Custom SSL feature. SNI Custom SSL works with most modern browsers, including Chrome version 6 and later (running on Windows XP and later or OS X 10.5.7 and later), Safari version 3 and later (running on Windows Vista and later or Mac OS X 10.5.6 and later), Firefox 2.0 and later, and Internet Explorer 7 and later (running on Windows Vista and later). However, some older browsers do not support SNI and will not be able to establish a connection with CloudFront to load the HTTPS version of your content. If you need to support non-SNI-compliant browsers for HTTPS content, we recommend using our Dedicated IP Custom SSL feature.
Setup is easy: simply follow the instructions outlined in the CloudFront Developer Guide and start serving your content quickly and securely.
You can also now configure Amazon CloudFront to require viewers to interact with your content over an HTTPS connection using the HTTP to HTTPS Redirect feature. When you enable HTTP to HTTPS Redirect, CloudFront will respond to an HTTP request with a 301 redirect response requiring the viewer to resend the request over HTTPS. There are no additional charges for using HTTP to HTTPS Redirect, but standard request fees apply.
|Mar 05, 2014||
VM Import now supports virtual machines running Windows Server 2012. You can import VMs running Windows Server 2012 R1 (Datacenter or Standard edition) to Amazon EC2 from VMware ESX and VMware Workstation VMDK images, Citrix Xen VHD images, and Microsoft Hyper-V VHD images. Once imported, these VMs will run as Windows instances within Amazon EC2. You can also export previously imported EC2 instances running Windows 2012 to VMware ESX VMDK, VMware ESX OVA, Microsoft Hyper-V VHD or Citrix Xen VHD file formats using VM Export.
In addition to adding Windows 2012 R1 support, VM Import has also enhanced the import experience for Windows Server 2003 and Windows Server 2008 VMs. Amazon EC2 instances created from these VMs will now benefit from having the EC2Config service installed and from having the latest-generation Citrix PV drivers.
VM Import can help you migrate your existing workloads, enterprise applications, or VM catalog to Amazon EC2. In addition to the newly added support for Windows Server 2012 VMs, you can also import Windows Server 2003, Windows Server 2008, Red Hat Enterprise Linux (RHEL) 5.1-6.5 (using Cloud Access), CentOS 5.1-6.5, Ubuntu 12.04, 12.10, 13.04, 13.10, and Debian 6.0.0-6.0.8 and 7.0.0-7.2.0 VMs to EC2 using VM Import. You can also use VM Export to export your previously imported Linux and Windows VMs. To learn more about VM Import, please visit: http://aws.amazon.com/ec2/vm-import/.
|Mar 04, 2014||
Red Hat Enterprise Linux is now available in the AWS GovCloud (US) Region. Amazon Web Services (AWS) and Red Hat® have teamed to offer Red Hat Enterprise Linux on Amazon EC2, a complete, enterprise-class computing environment for running business-critical applications and workloads.
Red Hat maintains the base Red Hat Enterprise Linux images for Amazon EC2. AWS customers receive updates at the same time that updates are made available from Red Hat, so your computing environment remains reliable and secure and your Red Hat Enterprise Linux-certified applications maintain their supportability.
You can launch Red Hat Enterprise Linux directly from the Amazon EC2 Launch Wizard in the Management Console for the AWS GovCloud (US) Region using your AWS GovCloud credentials.
Learn more about the AWS GovCloud (US) Region, or join us for our weekly AWS GovCloud (US) Office Hours every Tuesday from 1:00 to 2:00 PM EST and the Intro to AWS GovCloud (US) Region webinar on March 12th from 1:30 to 2:30 PM EST.
|Mar 03, 2014||
You can now use AWS OpsWorks and AWS CloudFormation together to manage applications on AWS. AWS CloudFormation enables modeling, provisioning and version-controlling of a wide range of AWS resources. AWS OpsWorks is an application management service that simplifies software configuration, application deployment, scaling, and monitoring.
You can now model OpsWorks components (stacks, layers, instances, and applications) inside CloudFormation templates, and provision them as CloudFormation stacks. This enables you to document, version control, and share your OpsWorks configuration. You have the flexibility to provision OpsWorks components and other related AWS resources, such as Amazon VPC and Elastic Load Balancing, with a single unified CloudFormation template or with separate CloudFormation templates.
Here is a sample CloudFormation template provisioning an OpsWorks PHP application, and here is a sample CloudFormation template provisioning a load balanced OpsWorks application inside a VPC. Please refer to the documentation for details.
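As a sketch of what such a template might contain, the fragment below models an OpsWorks stack, layer, and app. The IAM role and instance-profile ARNs and the repository URL are placeholders, and property names should be checked against the CloudFormation resource reference for OpsWorks.

```json
{
  "Resources": {
    "MyStack": {
      "Type": "AWS::OpsWorks::Stack",
      "Properties": {
        "Name": "php-app-stack",
        "ServiceRoleArn": "arn:aws:iam::123456789012:role/aws-opsworks-service-role",
        "DefaultInstanceProfileArn": "arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role"
      }
    },
    "MyLayer": {
      "Type": "AWS::OpsWorks::Layer",
      "Properties": {
        "StackId": { "Ref": "MyStack" },
        "Name": "PHP App Server",
        "Shortname": "php-app",
        "Type": "php-app",
        "EnableAutoHealing": "true",
        "AutoAssignElasticIps": "false",
        "AutoAssignPublicIps": "true"
      }
    },
    "MyApp": {
      "Type": "AWS::OpsWorks::App",
      "Properties": {
        "StackId": { "Ref": "MyStack" },
        "Name": "my-php-app",
        "Type": "php",
        "AppSource": { "Type": "git", "Url": "git://github.com/example/my-php-app.git" }
      }
    }
  }
}
```

An `AWS::OpsWorks::Instance` resource would typically be added to the layer as well; it is omitted here for brevity.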
|Mar 03, 2014||
We’re excited to announce three new features for members of AWS Activate:
To learn more and sign up for AWS Activate, click here.
|Feb 28, 2014||
The AWS Command Line Interface now fully supports AWS Data Pipeline commands. You can create and manage data processing workflows in AWS Data Pipeline using the same familiar tool you use to manage other AWS services, including data sources such as Amazon S3, Amazon DynamoDB, Amazon RDS, and Amazon Redshift. The CLI lets you easily create and update pipelines with your existing pipeline definitions, query for specific objects and attributes within a definition or other command outputs using the --query option, and write scripts to automate your processes.
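The workflow described above might look like the following sketch (pipeline IDs, names, and file names are placeholders; these commands require AWS credentials to run):

```shell
# Create a pipeline, upload a definition, and activate it.
aws datapipeline create-pipeline --name my-pipeline --unique-id my-pipeline-token
aws datapipeline put-pipeline-definition \
    --pipeline-id df-0012345EXAMPLE \
    --pipeline-definition file://my-definition.json
aws datapipeline activate-pipeline --pipeline-id df-0012345EXAMPLE

# Use --query to pull specific attributes out of command output:
aws datapipeline list-pipelines --query 'pipelineIdList[*].name'
```

Because these are ordinary CLI commands, they drop straight into shell scripts and cron jobs for automation.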
|Feb 27, 2014||
We are excited to announce support for multi-factor authentication (MFA) protection for cross-account access.
MFA is a security best practice that adds an extra layer of protection to your AWS account. It requires users to present two independent credentials: what the user knows (password or secret access key) and what the user has (MFA device). IAM already supports adding MFA protection when you grant access to users within a single AWS account. With today’s announcement, you can add similar protection when granting access to users across accounts, by requiring them to authenticate with MFA before assuming an IAM role.
For more information, visit the Configuring MFA-Protected API Access section in the Using IAM guide.
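As a sketch, a role's trust policy can require MFA by adding a condition on the `aws:MultiFactorAuthPresent` context key. The account ID below is a placeholder.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
    "Action": "sts:AssumeRole",
    "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
  }]
}
```

With this condition in place, users from the trusted account can assume the role only if they authenticated with an MFA device.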
|Feb 27, 2014||
We are pleased to announce a number of new features and design improvements to the Amazon EC2 Management Console. These changes follow updates we made in late 2013 when we introduced the new Launch Instance Wizard, AWS Marketplace integration and a refreshed look and feel to parts of the console. Now, we have updated the console with the new design consistently throughout, including in Events, Spot Requests, Bundle Tasks, Volumes, Snapshots, Security Groups, Placement Groups, Load Balancers and Network Interfaces.
We've also added new features that make it easier to manage your EC2 resources:
You can find more details about these new features in Jeff Barr's blog post.
To access these features and experience the new look and feel, just visit the EC2 Management Console and navigate to one of the updated sections, and you will be presented with the option to try out the new console. Don't forget to let us know what you think via the console feedback button!
|Feb 25, 2014||
The new CloudFront Content Delivery Optimization check in AWS Trusted Advisor can help you optimize the delivery of popular content from Amazon Simple Storage Service (Amazon S3).
This new check calculates the ratio of data transferred out to data stored in your Amazon S3 bucket to identify cases where you could improve performance and perhaps save money by delivering your content with Amazon CloudFront instead of directly from Amazon S3.
When you use Amazon CloudFront to deliver this content, requests for your content are automatically routed to the nearest edge location, and your content is cached so it can be delivered with lower latency. When the data transferred is more than 10 TB per month, pricing for Amazon CloudFront is lower than for data transferred directly from Amazon S3, meaning you can also save money.
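To make the ratio concrete, here is a toy illustration of the comparison the check performs. The threshold value is an assumption for illustration, not the value Trusted Advisor actually uses.

```python
# Toy illustration of the ratio behind the CloudFront Content Delivery
# Optimization check: data transferred out vs. data stored in a bucket.
# The threshold below is an assumed value for illustration only.
def cloudfront_candidate(transfer_out_gb_month, stored_gb, ratio_threshold=25.0):
    """Flag buckets whose out/stored ratio suggests CloudFront would help."""
    if stored_gb == 0:
        return False
    return (transfer_out_gb_month / stored_gb) >= ratio_threshold

# A 2 GB bucket serving 500 GB/month is a strong CloudFront candidate (ratio 250):
print(cloudfront_candidate(500, 2))
# A 100 GB archive bucket serving 10 GB/month is not (ratio 0.1):
print(cloudfront_candidate(10, 100))
```

A high ratio means the same objects are being fetched over and over, which is exactly the access pattern an edge cache serves well.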
Today, AWS Trusted Advisor provides 32 checks of your AWS environment and makes recommendations to reduce cost, improve system performance and reliability, or help close security gaps. You can stay up to date with your AWS resource deployment via weekly AWS Trusted Advisor notifications. Up to 3 recipients can get weekly status updates and savings estimations in English or Japanese. Learn more about setting up notifications on Jeff Barr's blog and in the FAQ.
For more information on AWS Trusted Advisor and the other 31 checks covering major AWS services, including Amazon EC2, Amazon S3, Amazon EBS, Amazon RDS, Elastic Load Balancing, Amazon Route 53, and Amazon CloudFront, visit AWS Trusted Advisor. Read more about Amazon CloudFront and Amazon Simple Storage Service.
|Feb 20, 2014||
AWS Data Pipeline is now supported in four additional regions as follows:
Although AWS Data Pipeline was previously hosted only in the US East (Northern Virginia) Region (us-east-1), it has always supported cross-region data flows. The new regions will help reduce service latency and provide greater redundancy for customers. Pricing in the new regions is the same as in us-east-1.
AWS Data Pipeline helps customers move, integrate, and process data across AWS compute and storage resources, as well as customer’s on-premises resources. To get started with AWS Data Pipeline for free, visit the AWS Data Pipeline detail page.
|Feb 20, 2014||
We’re excited to let you know that Amazon CloudFront now supports a new option for streaming on-demand video to your end users: Microsoft Smooth Streaming. You can now use CloudFront to deliver video in the Smooth Streaming format without the need to set up and operate any media servers. This adds Smooth Streaming to the set of video streaming technologies that CloudFront supports, which includes native on-demand streaming using HLS and multi-format live streaming using third-party media servers such as Wowza and Adobe.
There are no additional charges for using this feature. As with other Amazon CloudFront features, you pay only for what you use and there are no upfront fees or minimum monthly usage commitments. You can learn more about Smooth Streaming using Amazon CloudFront by visiting the Amazon CloudFront streaming page or by reading the Amazon CloudFront Developer Guide.
You can also join our webinar at 11:00 AM Pacific (UTC-7) on March 19, 2014 to learn more about video streaming using Smooth Streaming over Amazon CloudFront and other Amazon CloudFront media specific capabilities that enable you to deliver your content at scale to a global audience. Register for this webinar.
|Feb 20, 2014||
We are pleased to announce the immediate availability of a set of new M3 database instances for Amazon RDS. These new instances provide a similar ratio of CPU and memory resources as our previous M1 database instances but offer 50% more computational capability per core, significantly improving overall compute capacity. They are also priced about 6% lower than M1 instances, providing you with higher and more consistent compute performance at a lower price.
These new instances are available for all database engines and in all AWS regions, with AWS GovCloud (US) support coming in the future.
Among these new DB instances, db.m3.xlarge and db.m3.2xlarge are optimized for Provisioned IOPS storage. For a workload with 50% writes and 50% reads running on the db.m3.2xlarge instance type, it is possible to realize up to 12,500 IOPS for MySQL and 25,000 IOPS for Oracle and PostgreSQL. Refer to the Provisioned IOPS storage section of the User Guide to learn more.
Pricing for M3 database instances starts at $0.053/hour (effective price) for a 3-year Heavy Utilization Reserved Instance and $0.150/hour for On-Demand usage in the US West (Oregon) Region for MySQL. For more information on pricing, visit the Amazon RDS pricing page.
|Feb 20, 2014||
We are pleased to announce the release of the Amazon Elastic MapReduce (Amazon EMR) Connector to Amazon Kinesis. Kinesis can collect data from hundreds of thousands of sources, such as website clickstreams, marketing and financial information, manufacturing instrumentation, social media, and more. This connector enables batch processing of data in Kinesis streams with familiar Hadoop ecosystem tools such as Hive, Pig, Cascading, and standard MapReduce. You can now analyze data in Kinesis streams without having to write, deploy, and maintain any independent stream processing applications.
You can use this connector, for example, to write a SQL query using Hive against a Kinesis stream or to build reports that join and process Kinesis stream data with multiple data sources such as Amazon DynamoDB, Amazon S3, and HDFS. You can build reliable and scalable ETL processes that filter and archive Kinesis data into permanent data stores including Amazon S3, Amazon DynamoDB, or Amazon Redshift.
To facilitate end-to-end log processing scenarios using Kinesis and EMR, we have created a Log4J Appender that streams log events directly into a Kinesis stream, making the log entries available for processing in EMR. You can get started today by launching a new EMR cluster and using the code samples provided in the tutorials and FAQs. If you’re new to Kinesis you can learn more by visiting the Kinesis detail page.
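As a sketch of the Hive usage described above, a Kinesis stream can be exposed as a Hive table via the connector's storage handler. The table schema and stream name are placeholders; check the class and property names against the EMR connector documentation.

```sql
-- Sketch: map a Kinesis stream to a Hive table through the EMR connector
-- (columns and stream name are illustrative placeholders).
CREATE TABLE apache_logs (host STRING, request STRING, status INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
STORED BY 'com.amazon.emr.kinesis.hive.KinesisStorageHandler'
TBLPROPERTIES ("kinesis.stream.name" = "AccessLogStream");

-- Then query it like any other Hive table:
SELECT status, COUNT(*) FROM apache_logs GROUP BY status;
```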
|Feb 19, 2014||
We have made several enhancements to Elastic Load Balancing to further improve the security of your application traffic, making it easier for you to better protect end users’ confidential data and privacy.
You can now use these new security features:
You can configure these new features with the AWS Management Console, API, or Command Line Interface (CLI).
To learn more about these new features, see the documentation.
|Feb 18, 2014||
We are excited to announce two new features for Route 53 health checks and DNS Failover: fast interval health checks and configurable failover thresholds.
With fast interval health checks, Route 53 performs health check observations of your endpoint (for example, a web server) every 10 seconds instead of the default interval of 30 seconds. This enables Route 53 to confirm more quickly that an endpoint is unavailable and shortens the time required for DNS Failover to redirect traffic.
Configurable failover thresholds let you specify the number of consecutive health check observations required for Route 53 to confirm that an endpoint has switched from a healthy to unhealthy state, or vice versa, from 1 to 10 observations. You can select a lower threshold in order to fail over more quickly after an endpoint becomes unavailable, or a higher threshold to prevent traffic from being redirected in response to temporary or transient events.
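As a back-of-the-envelope illustration (ignoring DNS TTLs and propagation delays, so this is a sketch rather than the service's exact algorithm), the worst-case time to detect a failure is roughly the check interval multiplied by the failure threshold:

```python
# Rough worst-case detection time for a Route 53 health check:
# interval between observations times the number of consecutive
# failed observations required to mark the endpoint unhealthy.
def detection_time_seconds(interval_s, failure_threshold):
    return interval_s * failure_threshold

# Default interval (30 s) with a threshold of 3 observations: 90 s
print(detection_time_seconds(30, 3))
# Fast interval (10 s) with the minimum threshold of 1: 10 s
print(detection_time_seconds(10, 1))
```

This is the trade-off the two new features expose: shorter intervals and lower thresholds fail over faster, while higher thresholds ride out transient blips.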
You can use health checks along with Route 53's DNS Failover feature to help detect an outage of your website and redirect your end users to alternate locations where your application is operating properly. You can also use health checks to monitor your website's availability. Route 53 health checks are integrated with Amazon CloudWatch, so you can view the current and past status of your health checks, or configure alarms and notifications.
|Feb 12, 2014||
G2 Instances are now available in our Asia Pacific (Tokyo) Region. G2 instances are designed for applications that require 3D graphics capabilities. The instance is backed by a high-performance NVIDIA GPU, making it ideally suited for video creation services, 3D visualizations, streaming graphics-intensive applications, and other server-side workloads requiring massive parallel processing power. With this new instance type, customers can build high-performance DirectX, OpenGL, CUDA, and OpenCL applications and services without making expensive up-front capital investments.
Customers can launch G2 instances using the AWS console, Amazon EC2 command line interface, AWS SDKs and third party libraries. To learn more about G2 instances, visit Amazon EC2 details page. To get started immediately, visit the AWS Marketplace for GPU machine images from NVIDIA and other Marketplace sellers.
|Feb 10, 2014||
AWS CloudFormation now supports provisioning Amazon Redshift clusters and updating AWS Elastic Beanstalk applications. AWS CloudFormation is a service that simplifies provisioning and management of a wide range of AWS resources.
You can now model a Redshift cluster configuration in a CloudFormation template file and have CloudFormation launch the cluster with a few clicks or CLI commands. The template enables you to version control, replicate, or share your Redshift configuration. Here is a sample CloudFormation template that provisions a Redshift cluster.
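A minimal cluster declaration might look like the fragment below. The node type and database name are placeholders, and `MasterUserPassword` references a template parameter that is not shown; see the sample template for a complete example.

```json
{
  "Resources": {
    "MyCluster": {
      "Type": "AWS::Redshift::Cluster",
      "Properties": {
        "ClusterType": "multi-node",
        "NumberOfNodes": 2,
        "NodeType": "dw2.large",
        "DBName": "mydb",
        "MasterUsername": "admin",
        "MasterUserPassword": { "Ref": "MasterUserPassword" }
      }
    }
  }
}
```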
Previously, you could provision an Elastic Beanstalk application as part of a CloudFormation stack, as shown in this sample template which provisions an Elastic Beanstalk application. Now as well as provisioning, you can also update an Elastic Beanstalk application by updating the associated CloudFormation template and the stack.
To learn more about AWS CloudFormation, please see our detail page, documentation or watch this introductory video. We also have a large collection of sample templates that makes it easy to get started with CloudFormation within minutes.
|Jan 30, 2014||
We are excited to announce two new health-checking features for Amazon Route 53: string matching and HTTPS support.
When you enable Route 53 health checks, Route 53 regularly makes Internet requests to your application's endpoints—for example, web or application servers—from multiple locations around the world to determine whether the endpoint is available.
With string matching health checks, Route 53 searches the body of the response that your endpoint returns. If the response body (typically a web page) contains a string that you specify, Route 53 considers the endpoint healthy. Using string matching health checks, you can help ensure that your web application is serving the correct content, in addition to verifying that the web server is running and reachable over the Internet.
With HTTPS support, you can now create health checks for secure websites that are available only over SSL, to confirm that the web server is available and responding to requests over HTTPS. You can also combine string matching health checks with HTTPS health checks to verify that your secure website is returning the correct content.
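Combining the two features via the AWS CLI might look like the following sketch (domain, path, and search string are placeholders; the command requires AWS credentials to run):

```shell
# Sketch: create an HTTPS health check that also searches the response body.
aws route53 create-health-check \
    --caller-reference my-unique-ref-001 \
    --health-check-config '{
        "Type": "HTTPS_STR_MATCH",
        "FullyQualifiedDomainName": "www.example.com",
        "Port": 443,
        "ResourcePath": "/health",
        "SearchString": "OK"
    }'
```

Route 53 then considers the endpoint healthy only when the HTTPS request succeeds and the response body contains the search string.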
You can use health checks along with Route 53's DNS failover feature. With DNS failover, Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations where your application is operating properly. Using Route 53 DNS failover, you can run your primary application simultaneously in multiple AWS regions around the world. Route 53 automatically removes from service any region where your application is unavailable. You can also take advantage of a simple backup site hosted on Amazon Simple Storage Service (Amazon S3), with Route 53 directing users to this backup site in the event that your application becomes unavailable.
You can also use health checks to monitor your website's availability. Route 53 health checks are integrated with Amazon CloudWatch, and you can view current and past statuses of your health checks, or configure alarms and notifications.
|Jan 29, 2014||
In addition to US East (Northern Virginia), Amazon Simple Email Service (Amazon SES) is now available in two further AWS regions: EU (Ireland) and US West (Oregon).
Support for these additional regions means that you can reduce the network latency of your email-sending application by choosing to use the Amazon SES endpoint in the AWS region that is closest to your application. And, if you are in Europe, you can now have an email delivery service hosted entirely in the EU Region.
|Jan 29, 2014||
We are excited to announce that Amazon Simple Queue Service (SQS) now offers Dead Letter Queues (DLQs). With DLQs, you can designate special queues to collect messages that could not be processed successfully after a given number of attempts, and then write applications or assign people to analyze why those messages failed. This helps you troubleshoot events that may be impacting your end users or other systems.
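A dead letter queue is configured through a source queue's redrive policy; a sketch via the AWS CLI follows (the queue URL, ARN, and receive count are placeholders, and the command requires AWS credentials to run):

```shell
# Sketch: after 5 failed receives, move a message to the dead letter queue.
aws sqs set-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-source-queue \
    --attributes '{
        "RedrivePolicy": "{\"maxReceiveCount\":\"5\",\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:my-dead-letter-queue\"}"
    }'
```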
|Jan 29, 2014||
We are excited to announce the immediate availability of EC2 Usage Reports, which are designed to make it easier for you to track and better manage your EC2 usage and spending. You can use these interactive usage reports to view your historical EC2 instance usage, and help you plan for future EC2 usage. There are currently two reports available:
You can easily access these reports by visiting the Reports section of the billing console. You can customize the reports, and bookmark them for easy access in the future. For more information about the reports, see the EC2 Users Guide.
|Jan 28, 2014||
We are pleased to make the Amazon Kinesis Storm Spout available. The Amazon Kinesis Storm Spout helps developers use Amazon Kinesis with Storm, an open-source, distributed, real-time computation system. This version of the spout fetches data from an Amazon Kinesis stream and emits it as tuples that Storm topologies can process. Developers can add the spout to their existing Storm topologies and leverage Amazon Kinesis as a reliable, scalable stream capture, storage, and replay service that powers their Storm processing applications.
To get started, see http://aws.amazon.com/kinesis/developer-resources/. You can learn more about Amazon Kinesis here.
|Jan 27, 2014||
AWS CloudFormation now supports Auto Scaling scheduled actions and DynamoDB local and global secondary indexes. AWS CloudFormation is a service that simplifies provisioning and management of a wide range of AWS resources.
You can model your Auto Scaling architecture in a CloudFormation template. The CloudFormation service can automatically create the desired architecture from the template in a fast and consistent manner. With support for scheduled actions, you can now model Auto Scaling schedules in CloudFormation templates. If you have a predictable traffic pattern, you can scale Auto Scaling groups using scheduled actions. We have created a sample template to show you how.
CloudFormation already provided the ability to provision DynamoDB tables. Now you can also provision DynamoDB tables with local and global secondary indexes using CloudFormation. Local and global secondary indexes enable more flexible queries on a wider range of attributes than just the primary key. We’ve created a sample template that creates DynamoDB tables with local and global secondary indexes.
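A table with a global secondary index might be declared as in the fragment below (table, attribute, and index names are placeholders):

```json
{
  "MyTable": {
    "Type": "AWS::DynamoDB::Table",
    "Properties": {
      "AttributeDefinitions": [
        { "AttributeName": "OrderId", "AttributeType": "S" },
        { "AttributeName": "CustomerId", "AttributeType": "S" }
      ],
      "KeySchema": [
        { "AttributeName": "OrderId", "KeyType": "HASH" }
      ],
      "ProvisionedThroughput": { "ReadCapacityUnits": 5, "WriteCapacityUnits": 5 },
      "GlobalSecondaryIndexes": [{
        "IndexName": "ByCustomer",
        "KeySchema": [{ "AttributeName": "CustomerId", "KeyType": "HASH" }],
        "Projection": { "ProjectionType": "ALL" },
        "ProvisionedThroughput": { "ReadCapacityUnits": 5, "WriteCapacityUnits": 5 }
      }]
    }
  }
}
```

Here the base table is keyed by `OrderId`, while the global secondary index makes the same items queryable by `CustomerId`.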
To learn more about CloudFormation, please visit the detail page, documentation or watch this introductory video. We have a large collection of sample templates that makes it easy to get started with CloudFormation in minutes.
|Jan 23, 2014||
We're delighted to announce the availability of Dense Compute nodes, a new SSD-based node type for Amazon Redshift. Dense Compute nodes allow customers to create very high performance data warehouses using fast CPUs, large amounts of RAM, and SSDs. Customers can get started with a single Dense Compute node for as little as $0.25/hour with no commitments, or an effective price of $0.10/hour when using 3-year Reserved Instances.
For customers with less than 500GB of data in their data warehouses, Dense Compute nodes are the most cost-effective and highest performance option. Above 500GB, customers whose primary focus is performance can continue with Dense Compute nodes up to hundreds of terabytes, giving them the highest ratio of CPU, Memory and I/O to storage. If performance isn’t as critical for a customer’s use case, or if customers want to prioritize reducing costs further, they can use the larger Dense Storage nodes and scale up to a petabyte or more of compressed user data for under $1,000/TB/Year (3 Year Reserved Instance pricing). Scaling clusters up and down or switching between node types requires a single API call or a few clicks in the AWS Console.
On-Demand prices for a single Large Dense Compute node start at $0.25/hour in the US East (Northern Virginia) Region and drop to an effective price of $0.10/hour with a three year reserved instance. Dense Compute and Dense Storage nodes for Amazon Redshift are available in the US East (N. Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions. To get started, please visit the Amazon Redshift detail page.
|Jan 21, 2014||
We are excited to announce the availability of two new Amazon EC2 M3 instance sizes, m3.medium and m3.large. We are also lowering the prices of storage for Amazon S3 and Amazon EBS in all regions, effective February 1st, 2014.
Amazon EC2 M3 instance sizes and features: We have introduced two new sizes for M3 instances, m3.medium and m3.large, with 1 and 2 vCPUs respectively. We have also added SSD-based instance storage and support for instance store-backed AMIs (previously known as S3-backed AMIs) for all M3 instance sizes. M3 instances feature high-frequency Intel Xeon E5-2670 (Sandy Bridge or Ivy Bridge) processors. Compared to previous-generation M1 instances, M3 instances provide higher, more consistent compute performance at a lower price. These new instance sizes are available in all AWS regions, with AWS GovCloud (US) support coming soon. You can launch M3 instances as On-Demand, Reserved, or Spot instances. To learn more about M3 instances, please visit the Amazon EC2 Instance Types page.
Amazon S3 storage prices are lowered up to 22%: All Amazon S3 standard storage and Reduced Redundancy Storage (RRS) customers will see a reduction in their storage costs. In the US Standard region, we are lowering S3 standard storage prices up to 22%, with similar price reductions across all other regions. The new lower prices can be found on the Amazon S3 pricing page.
Amazon EBS prices are lowered up to 50%: EBS Standard volume prices are lowered up to 50% for both storage and I/O requests. For example, in the US East region, the price for Standard volumes is now $0.05 per GB-month of provisioned storage and $0.05 per 1 million I/O requests. The new lower prices can be found on the Amazon EBS pricing page.
|Jan 17, 2014||
Amazon Web Services and SUSE® have teamed to offer SUSE Linux Enterprise Server (SLES) on Amazon EC2, a complete, enterprise-class computing environment for running business-critical applications and workloads.
SUSE maintains the base SLES images for Amazon EC2. AWS customers receive updates at the same time that updates are made available from SUSE, so your computing environment remains reliable and secure and your SLES-certified apps maintain their supportability.
Launching a SUSE EC2 instance in the AWS GovCloud (US) Region is quick and easy using the EC2 Console Launch Wizard in the Management Console for the AWS GovCloud (US) Region.
AWS GovCloud (US) is an isolated AWS region designed to allow U.S. government agencies, contractors and customers with regulatory needs to move more sensitive workloads into the cloud. Please join us for our weekly AWS GovCloud (US) Office Hours every Tuesday at 1:00 – 2:00 PM EST and the Intro to AWS GovCloud (US) Region webinar on February 12th, 1:30 – 2:30 PM EST to learn more.
Please contact us to get started in AWS GovCloud (US) today!
|Jan 14, 2014||
We are pleased to announce that AWS Marketplace now supports 1-Click launch for Amazon Virtual Private Cloud (Amazon VPC). Now, you can launch all our AMI-based AWS Marketplace products in a private, logically isolated network that you control. Once you have configured an Amazon VPC, just select the VPC and subnet, and launch with 1-Click. Additionally, for a select set of networking and security products, you will be able to configure multiple subnets and associate Elastic IPs directly from the AWS Marketplace website during the 1-Click launch process. Learn more on Jeff Barr's Blog.
Both of these new capabilities make it even easier to apply security best practices to the Marketplace software you run in the AWS cloud.
|Jan 14, 2014||
AWS Training & Certification offers a new training series called “Introduction to AWS,” designed to help you quickly get started using an AWS service in 30 minutes or less. Start with a short video to learn key concepts and terminology and watch a step-by-step console demonstration of an AWS service. Following the video, you can get hands-on practice with that AWS service using a free self-paced training lab on run.qwiklabs.com.
Our first set of videos and labs includes the following topics:
To learn more about this and other AWS Training resources that are available to you, visit http://aws.amazon.com/training/intro_series.
|Jan 13, 2014||
We are pleased to announce that, starting today, Amazon RDS for Oracle allows you to change the system time zone of your RDS Oracle DB Instance. This allows you to maintain time compatibility with your on-premises environments or legacy applications.
To modify the time zone of a new or existing RDS Oracle instance, use the “Option Group” option in the AWS Management Console. Please note that this option changes the time zone at the host level and impacts all date columns and values. We recommend that you analyze your data to determine what impact a time zone change will have. Before modifying the time zone in a production DB instance, we recommend you test the change in a test DB Instance.
Learn more by visiting the Oracle section of the Amazon RDS User Guide.
|Jan 13, 2014||
Two new AWS Certification exams, the AWS Certified Developer – Associate Level and the AWS Certified SysOps Administrator – Associate Level, are now available. AWS Certifications designate individuals with IT competence and confidence in working with AWS technology, helping both employers to identify qualified candidates and IT professionals to acquire certifications that validate their expertise for continued career growth. Exams are administered through Kryterion testing centers in more than 750 testing locations worldwide.
To learn more about the AWS Certification Program, visit http://aws.amazon.com/certification.
|Jan 07, 2014||
We are excited to announce the launch of edge locations in Taipei, Taiwan and Rio de Janeiro, Brazil. This is our first edge location in Taiwan and our second edge location in Brazil (joining Sao Paulo). These new locations will improve performance and availability for end users of your applications being served by Amazon CloudFront and Amazon Route 53, and they bring the total number of AWS edge locations to 51 worldwide.
These new edge locations support all Amazon CloudFront functionality, including accelerating your entire website (static, dynamic and interactive content), live and on-demand streaming media, and security features like custom SSL certificates, private content and geo-restriction of content. They also support all Amazon Route 53 functionality including health checks, DNS failover, and latency-based routing.
The pricing for the edge location in Taipei is the same as that in Hong Kong, the Philippines, South Korea and Singapore, and the pricing for the edge location in Rio de Janeiro is the same as that in Sao Paulo. That means your end users in these regions will benefit from the lower latency without any additional costs.
|Jan 02, 2014||
We are pleased to announce that you can now create Auto Scaling groups based on running instances and attach running instances to existing groups. You can also retrieve your limits for Auto Scaling groups and launch configurations.
Create Auto Scaling resources from running instances
You can now:
- Create a new Auto Scaling group based on an existing running instance
- Attach one or more running instances to an existing Auto Scaling group
You can use these features if you are interested in enabling Auto Scaling for your existing applications, and want to do so without having to shut down your instances. You can also use these features to warm up instances ahead of time before bringing them into service.
View limits on Auto Scaling groups and launch configurations
In addition, you can also use the new DescribeAccountLimits action to view your limits for Auto Scaling groups and launch configurations. If you want to raise these limits, submit an Amazon EC2 limit increase request and specify your desired number of Auto Scaling groups or launch configurations in the Use Case Description field.
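Via the AWS CLI, the new capabilities might be used as follows (instance IDs and group names are placeholders; these commands require AWS credentials to run):

```shell
# Create a new Auto Scaling group from a running instance; the group derives
# its launch configuration from the instance's settings.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --instance-id i-12345678 \
    --min-size 1 --max-size 3

# Attach additional running instances to an existing group:
aws autoscaling attach-instances \
    --instance-ids i-87654321 \
    --auto-scaling-group-name my-asg

# View your current limits for groups and launch configurations:
aws autoscaling describe-account-limits
```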
Specify additional EBS volume and block device mapping settings
These features are available using the AWS SDKs, Auto Scaling APIs, and command-line tools. CloudFormation also supports these new features.