Category: Amazon EC2


Deliver Custom Content With CloudFront

Amazon CloudFront connects with other members of the AWS Family of services to deliver content to end users at high speed and with low latency. To get started with CloudFront, you simply create a Distribution, point it at a static or dynamic origin running on an AWS service such as S3 or EC2 (or at a custom origin of your own), and make use of the URLs provided to you as part of the Distribution.

Today we are enhancing CloudFront with a new feature that will allow you to customize or personalize the dynamic content that you deliver to your users. You can now use additional characteristics of the end user request, such as their location or the device that they use, to decide what content to return. These characteristics are passed from CloudFront to your origin server in the form of HTTP headers. Any headers added by CloudFront will be prefixed with CloudFront-.

Header Power!
Making additional headers available to your origin server means that your application can now make choices that are more fully informed by the overall context of the request. Here are some examples of what can be done:

  • Mobile Device Detection – You can use the User-Agent header to distinguish between desktop and mobile devices, and to provide content that is suitable for and appropriate to each one. CloudFront will also match the header against an internal device list and will send a CloudFront-Is-Mobile-Viewer, CloudFront-Is-Desktop-Viewer, or CloudFront-Is-Tablet-Viewer header to give you a generic indication of the device type (see the sketch after this list).
  • Geo-Targeting – CloudFront will detect the user’s country of origin and pass the country code along to you in the CloudFront-Viewer-Country header. You can use this information to customize your responses without having to use URLs that are specific to each country.
  • Multi-Site Hosting – CloudFront can now be configured to pass the Host header along to your origin server so that you can host multiple web sites and have CloudFront cache responses that are specific to each site.
  • Protocol Detection – You can deliver distinct content to users based on the protocol (HTTP or HTTPS) that they use to access your site. This information is available to your origin server in the CloudFront-Forwarded-Proto header.
  • CORS (Cross Origin Resource Sharing) – CloudFront can now be used to deliver web assets such as JavaScript and fonts to other websites. Because CloudFront can now be configured to pass the Origin header along to the origin server, you can now use CORS to allow cross-origin access to your content.
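To make these headers concrete, here is a minimal sketch of an origin application that branches on them. It assumes the relevant headers have been whitelisted for forwarding (see below) and uses Python’s built-in WSGI server purely for illustration; it is not production code.

from wsgiref.simple_server import make_server

def app(environ, start_response):
    # WSGI exposes request headers as HTTP_* keys (dashes become underscores).
    is_mobile = environ.get("HTTP_CLOUDFRONT_IS_MOBILE_VIEWER") == "true"
    country = environ.get("HTTP_CLOUDFRONT_VIEWER_COUNTRY", "US")
    proto = environ.get("HTTP_CLOUDFRONT_FORWARDED_PROTO", "http")

    # Choose a response variant based on the request context.
    body = f"mobile={is_mobile} country={country} proto={proto}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()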

How it Works
Each of your CloudFront distributions now contains a list of headers that are to be forwarded to the origin server. You have three options:

  • None – This option preserves the original behavior; no additional headers are forwarded to your origin.
  • All – This option forwards all headers and effectively disables all caching at the edge.
  • Whitelist – This option gives you full control over the headers that are to be forwarded. The list starts out empty, and grows as you add more headers. You can add common HTTP headers by choosing them from a list. You can also add “custom” headers by simply entering the name.

If you choose the Whitelist option, each header that you add to the list becomes part of the cache key for the URLs associated with the distribution. Adding a header to the list simply tells CloudFront that the value of the header can affect the content returned by the origin server.

Let’s say you add Accept-Language to the list of forwarded headers. This has two important effects. First, your origin server will have access to the language, and can return content in that language. Second, the value of the header becomes part of the cache key. In other words, each edge node will be able to cache the content specific to the language or languages in the geographic vicinity of the node.
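Conceptually, the edge cache behaves something like this (an illustrative sketch in Python, not CloudFront’s actual implementation):

cache = {}

def fetch_from_origin(url, headers):
    # Stand-in for a real request to the origin server.
    return f"{url} rendered for {headers.get('Accept-Language', 'en')}"

def edge_fetch(url, headers, whitelist=("Accept-Language",)):
    # Each whitelisted header value becomes part of the cache key, so the
    # English and French responses for the same URL are cached separately.
    key = (url, tuple(headers.get(h) for h in whitelist))
    if key not in cache:
        cache[key] = fetch_from_origin(url, headers)   # cache miss
    return cache[key]                                  # cache hit

edge_fetch("/home", {"Accept-Language": "fr"})   # miss: goes to the origin
edge_fetch("/home", {"Accept-Language": "fr"})   # hit: served from the edge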

You should exercise care when adding new headers to the list. Adding too many headers has the potential to reduce the hit rate for the cache in the edge node; this will result in additional traffic to your origin server.

If you are using CloudFront in conjunction with S3, you can now choose to forward the Origin header. If you do this, you can use a CORS policy to share the same content between multiple websites.
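As a sketch of that setup (using boto3 for illustration; the bucket name and allowed origin are hypothetical), a minimal CORS policy on the S3 origin bucket might look like this:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="my-asset-bucket",                            # hypothetical bucket
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://www.example.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)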

Getting Started
You can manage your headers from the CloudFront API or the AWS Management Console.

Here is how you manage your headers from the console:
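If you prefer to script the change, the same whitelist can be edited through the CloudFront API. Here is a hedged sketch (boto3 for illustration; the distribution ID is a placeholder) that adds CloudFront-Viewer-Country to the forwarded headers of a distribution’s default cache behavior:

import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "EDFDVBD6EXAMPLE"   # placeholder distribution ID

# Fetch the current configuration along with its ETag (required for updates).
resp = cloudfront.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Add the header to the whitelist; its value now becomes part of the cache key.
headers = config["DefaultCacheBehavior"]["ForwardedValues"]["Headers"]
items = sorted(set(headers.get("Items", [])) | {"CloudFront-Viewer-Country"})
headers.update(Items=items, Quantity=len(items))

cloudfront.update_distribution(Id=dist_id, IfMatch=etag,
                               DistributionConfig=config)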

Jeff;

New SSD-Backed Elastic Block Storage

Amazon Elastic Block Store (EBS for short) lets you create block storage volumes and attach them to EC2 instances. AWS users enjoy the ability to create EBS volumes that range in size from 1 GB up to 1 TB, to create snapshot backups, and to create volumes from snapshots with a couple of clicks, all with optional encryption at no extra charge.

We launched EBS in the Summer of 2008 and added the Provisioned IOPS (PIOPS) volume type in 2012. As a quick refresher, IOPS is short for Input/Output Operations Per Second. A single EBS volume can be provisioned for up to 4,000 IOPS; multiple PIOPS volumes can be connected together via RAID to support up to 48,000 IOPS (twelve 4,000-IOPS volumes, for example; see our documentation on EBS RAID Configuration for more information).

Today we are enhancing EBS with the addition of the new General Purpose (SSD) volume type as our default block storage offering. This new volume type was designed to offer balanced price/performance for a wide variety of workloads (small and medium databases, dev and test, and boot volumes, to name a few), and should be your first choice when creating new volumes. These volumes take advantage of the technology stack that we built to support Provisioned IOPS, and are designed to offer 99.999% availability, as are the existing EBS volume types.

General Purpose (SSD) volumes take advantage of the increasing cost-effectiveness of SSD storage to offer customers 10x more IOPS, 1/10th the latency, and more consistent bandwidth and performance than offerings based on magnetic storage. With a simple pricing structure where you pay only for the storage provisioned (no need to provision IOPS or to factor in the cost of I/O operations), the new volumes are priced as low as $0.10/GB-month.

General Purpose (SSD) volumes are designed to provide more than enough performance for a broad set of workloads, all at a low cost. They predictably burst up to 3,000 IOPS and reliably deliver 3 sustained IOPS for every GB of configured storage. In other words, a 10 GB volume will reliably deliver 30 IOPS and a 100 GB volume will reliably deliver 300 IOPS. There are more details on the mechanics of the burst model below, but most applications won’t exhaust their burst allocation, and actual performance will usually be higher than the baseline. The volumes are designed to deliver the configured level of IOPS performance with 99% consistency.

You can use this new volume type with all of the EBS-Optimized instance types for greater throughput and consistency.

Boot Boost
The new General Purpose (SSD) volumes can enhance the performance and responsiveness of your application in many ways. For example, they have a very measurable impact when booting an operating system on an EC2 instance.

Each newly created SSD-backed volume receives an initial burst allocation that provides up to 3,000 IOPS for 30 minutes. This initial allocation provides for a speedy boot experience for both Linux and Windows, and is more than sufficient for multiple boot cycles, regardless of the operating system that you use on EC2.

Our testing indicates that a typical Linux boot requires about 7,000 I/O operations and a typical Windows boot requires about 70,000, so the initial 30-minute burst allocation (3,000 IOPS × 30 minutes = 5.4 million I/O operations) covers dozens of boot cycles. Switching from a Magnetic volume to a General Purpose (SSD) volume of the same size reduces the typical boot time for Windows 2008 R2 by approximately 50%.

If you have been using AWS for a while, you probably know that each EC2 AMI specifies a default EBS volume type, often Magnetic (formerly known as Standard). A different volume type can be specified at instance launch time. The EC2 console makes choosing General Purpose (SSD) volumes in place of the default simple, and you can optionally make this the behavior for all instance launches made from the console.

When you use the console to launch an instance, you have the option to change the default volume type for the boot volume. You can do this for a single launch or for all future launches from the console, as follows (you can also choose to stick with magnetic storage):

If you launch your EC2 instances from the command line, or the EC2 API, you need to specify a different block device mapping in order to use the new volume type. Here’s an example of how to do this from the command line via the AWS CLI:


$ aws ec2 run-instances \
  --key-name mykey \
  --security-groups default \
  --instance-type m3.xlarge \
  --image-id ami-60f69f50 \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeType":"gp2"}}]' \
  --region us-west-2

To make it easier to get started with General Purpose (SSD) boot volumes when using the command line or the EC2 API, versions of the latest Amazon Linux AMI and the Windows Server 2012 R2 Base AMI (English) that specify General Purpose (SSD) volumes as the default are now available. To obtain the ID of the latest published General Purpose (SSD) Windows AMI in your region, you can use the Get-EC2ImageByName cmdlet as follows:


C:\> Get-EC2ImageByName -Names Windows_Server-2012-R2_RTM-English-64Bit-GP2*

Here are the names and identifiers for the Amazon Linux AMIs:

Region AMI ID Full Name
us-east-1 ami-aaf408c2 amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
us-west-2 ami-8f6815bf amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
us-west-1 ami-e48b8ca1 amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
eu-west-1 ami-dd925baa amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
ap-southeast-1 ami-82d78bd0 amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
ap-southeast-2 ami-91d9bcab amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
ap-northeast-1 ami-df470ede amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2
sa-east-1 ami-09cf6014 amazon/amzn-ami-hvm-2014.03.2.x86_64-gp2

We are also working to make it simpler to configure storage for your instance so that you can easily choose storage options for existing EBS-backed AMIs (stay tuned for an update).

Choosing an EBS Volume Type
With today’s launch, you can now choose from three distinct types of EBS volumes and might be wondering which one is best for each use case. Here are a few thoughts and guidelines:

  • General Purpose (SSD) – The new volume type is a great fit for small and medium databases (either NoSQL or relational), development and test environments, and (as described above) boot volumes. In general, you should now plan to start with this volume type and move to one of the others only if necessary. You can achieve up to 48,000 IOPS by connecting multiple volumes together using RAID.
  • Provisioned IOPS (SSD) – Volumes of this type are ideal for the most demanding I/O-intensive, transactional workloads and for large relational or NoSQL databases. This volume type offers the most consistent and predictable performance, and lets you provision exactly the level of performance you need, paying only for what you provision. Once again, you can achieve up to 48,000 IOPS by connecting multiple volumes together using RAID.
  • Magnetic – Magnetic volumes (formerly known as Standard volumes) provide the lowest cost per Gigabyte of all Amazon EBS volume types and are ideal for workloads where data is accessed less frequently and cost management is a primary objective.

You can always switch from one volume type to another by creating a snapshot of an existing volume and then creating a new volume of the desired type from the snapshot. You can also use migration commands and tools such as tar, dd, or Robocopy.
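Here is a sketch of that snapshot-based switch (boto3 for illustration; the volume ID, Region, and Availability Zone are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Snapshot the existing Magnetic volume and wait for the snapshot to complete.
snap = ec2.create_snapshot(VolumeId="vol-1234abcd",      # placeholder volume ID
                           Description="migrate to gp2")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Create a General Purpose (SSD) volume of the same size from the snapshot.
ec2.create_volume(SnapshotId=snap["SnapshotId"],
                  AvailabilityZone="us-west-2a",
                  VolumeType="gp2")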

Under the Hood – Performance Burst Details
Each volume can provide up to 3,000 IOPS in bursts that can span up to 30 minutes, regardless of the volume size. This burst capability turns out to be a great fit for the use cases that I mentioned above. For example, the IOPS load generated by a typical relational database turns out to be very spiky. Database load and table scan operations require a burst of throughput; other operations are best served by a consistent expectation of low latency. General Purpose (SSD) volumes are able to satisfy all of these requirements in a cost-effective manner. We have analyzed a wide variety of application workloads and carefully engineered General Purpose (SSD) volumes to take advantage of this spiky behavior, with the expectation that they will rarely exhaust their accumulated burst of IOPS.

Within the General Purpose (SSD) implementation is a Token Bucket model that works as follows:

  • Each token represents an “I/O credit” that pays for one read or one write.
  • A bucket is associated with each General Purpose (SSD) volume, and can hold up to 5.4 million tokens.
  • Tokens accumulate at a rate of 3 per configured GB per second, up to the capacity of the bucket.
  • Tokens can be spent at a rate of up to 3,000 per second per volume.
  • The baseline performance of the volume is equal to the rate at which tokens are accumulated — 3 IOPS per configured GB (simulated in the sketch below).
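Here is a small, self-contained simulation of those rules (an illustrative sketch, not the actual EBS implementation) for a 100 GB volume facing a sustained 3,000 IOPS demand:

BUCKET_CAPACITY = 5_400_000   # maximum accumulated I/O credits per volume
MAX_SPEND_RATE = 3_000        # burst ceiling: tokens spent per second

def simulate(volume_gb, demand_iops, seconds, tokens=BUCKET_CAPACITY):
    """Return the IOPS delivered each second under the token bucket rules."""
    fill_rate = 3 * volume_gb            # tokens accumulate at 3/GB/second
    delivered = []
    for _ in range(seconds):
        tokens = min(tokens + fill_rate, BUCKET_CAPACITY)
        served = min(demand_iops, MAX_SPEND_RATE, tokens)
        tokens -= served
        delivered.append(served)
    return delivered

# A 100 GB volume (300 IOPS baseline) bursts at 3,000 IOPS until the bucket
# drains (about 2,000 seconds from full: 5,400,000 / (3,000 - 300)), then
# settles at its 300 IOPS baseline.
out = simulate(volume_gb=100, demand_iops=3_000, seconds=2_400)
print(out[0], out[-1])   # 3000 300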

All of this work behind the scenes means that you, the AWS customer, can simply create EBS volumes of the desired size, launch your application, and your I/O to the volumes will proceed as rapidly and efficiently as possible.

OpsWorks and CloudFormation Support
You can create volumes of this type as part of an AWS OpsWorks Layer:

You can also create them from within a CloudFormation template as follows:


...
{
   "Type":"AWS::EC2::Volume",
   "Properties" : {
      "AvailabilityZone" : "us-east-1a",
      "Size" : 100,
      "VolumeType" : "gp2"
   }
}
...

Pricing
The new General Purpose (SSD) volumes are priced at $0.10 / GB / month in the US East (Northern Virginia) Region, with no additional charge for I/O operations. For pricing in other AWS Regions, please take a look at the EBS Pricing page.

We are also announcing that we are reducing the price of IOPS for Provisioned IOPS volumes by 35%. For example, if you create a Provisioned IOPS volume and specify 1,000 IOPS your monthly cost will decline from $100 to $65 per month in the US East (Northern Virginia) Region, with similar reductions in the other Regions. The cost for Provisioned Storage remains unchanged at $0.125 / GB / month.

Jeff;

Rapidly Deploy SAP HANA on AWS With New Deployment Guide and Templates

I have written about the SAP HANA database several times in the past year or two. Earlier this year we announced that it was certified for production deployment on AWS and that you can bring your existing SAP HANA licenses into the AWS cloud.

Today we are simplifying and automating the process of getting HANA up and running on AWS. We have published a comprehensive Quick Start Reference Deployment and a set of CloudFormation templates that provide you with a reference implementation, architectural guidance, and a fully automated way to deploy a production-ready instance of HANA in a single-node or multi-node configuration, with a couple of clicks, in under an hour, in true self-service form.

 

Reference Architecture

The Reference Deployment document will walk you through all of the steps. You can choose to create a single node environment for non-production use or a multi-node environment that is ready for production workloads.

Here’s the entire multi-node architecture:

The reference implementation was designed to provide you with high performance, and it incorporates best practices for HANA deployment and AWS security. It contains the following AWS components:

  • An Amazon Virtual Private Cloud (VPC) with public and private subnets.
  • A NAT instance in the public subnet for outbound Internet connectivity and inbound SSH access.
  • A Windows Server instance in the public subnet, preloaded with SAP HANA Studio and accessible via Remote Desktop.
  • A single-node or multi-node SAP HANA virtual appliance, configured according to SAP best practices, on a supported operating system (SUSE Linux).
  • An AWS Identity and Access Management (IAM) role with fine-grained permissions.
  • An S3 bucket for backups.
  • Preconfigured VPC security groups.

The single-node and multi-node implementations use EC2’s r3.8xlarge instance type. Each of these instances is packed with 244 GiB of RAM and 32 virtual CPUs, all powered by the latest Intel Xeon Ivy Bridge processors.

Both implementations include 2.4 TB of EBS storage, configured as a dozen 200 GB volumes connected in RAID 0 fashion. Non-production environments use standard EBS volumes; production environments use Provisioned IOPS volumes, each capable of delivering 2000 IOPS.

When you follow the instructions in the document and get to the point where you are ready to use the CloudFormation template, you will have the opportunity to supply all of the parameters that are needed to create the AWS resources and to set up SAP HANA:

For More Information
To learn more about this and other ways to get started on AWS with popular enterprise applications, visit our AWS Quick Start Reference Deployments page.

Jeff;

Windows Server 2012 R2 and SQL Server 2014 AMIs Now Available

If you have been reading this blog for any length of time, you probably know that I sometimes talk about the number of options and choices that you have at your fingertips when you use AWS. You can choose from a broad array of cloud services and run your code on a wide variety of Linux distributions and Windows versions, all on the instance type that best fits your needs.

In keeping with our tradition, I am happy to announce that we are now making a set of Windows Server 2012 R2 AMIs (Amazon Machine Images) available for use on EC2. These AMIs are available in 19 languages and include a set of PV (paravirtualized) drivers that have been certified by Microsoft for Windows Server 2012 R2. The AMIs automatically make use of EC2’s Enhanced Networking for higher I/O performance, lower inter-instance latency, and lower CPU utilization when run on R3, C3, and I2 instances. The AMIs are available in several flavors, including Server Core, a low-maintenance, minimal server installation.

This version of Windows also includes many new features; here are just a few that are relevant when running in the cloud on EC2:

  • Storage Tiering – This feature lets you dynamically move chunks of data between different classes of storage, such as fast SSD and slower hard drives. You can create a single Virtual Disk that spans both classes of storage and have Windows Server 2012 R2 keep the most frequently accessed data blocks on the SSDs and the less frequently accessed blocks on the hard drives, transparently and behind the scenes.
  • Write-Back Cache – This feature is an adjunct to Storage Tiering. If you create a fast tier that is 1 GB or larger, Windows will use 1 GB as a write-back cache. This cache buffers rapid sequences of writes that are destined for the underlying hard drive.
  • Desired State Configuration – This PowerShell extension (also known as DSC) lets you establish (programmatically) a desired set of roles and features, and then monitor, detect, and update any system that is not in the desired state.

For more information on the full Windows Server 2012 R2 feature set, read the TechNet article, What’s New in Windows Server 2012 R2.

If you have existing VMs running Windows Server 2012 R2, you can now import them using VM Import. As part of the import process, the VM Import service will install the latest drivers for Windows Server 2012 R2 and the EC2Config service. Your imported Windows Server 2012 R2 VM will also take advantage of enhanced networking when you use an instance type that supports this capability.

 

SQL Server 2014 AMIs
As part of this release, new Microsoft SQL Server 2014 Standard, Web and Express Edition AMIs running on Windows Server 2012 R2 are available in localized versions for English, Japanese and Brazilian Portuguese.

PowerShell Updates
We have also updated the AWS Tools for PowerShell. The new Get-EC2ImageByName cmdlet can now be used to obtain the ID of the latest published version of a Windows AMI:

Quick Start Reference Deployments
We have launched Quick Start Reference Deployments to help you to deploy a pair of popular Microsoft products on AWS with just a few clicks. Each reference deployment includes a comprehensive reference guide and a CloudFormation template. Here’s what we have for you:

The Microsoft Active Directory Domain Services template can be deployed in less than an hour, with an AWS infrastructure cost of less than $3.00 per hour. Read the Reference Deployment Guide or Launch the Quick Start. You will be prompted for the information needed to launch Active Directory:

The Remote Desktop Gateway creates all of the necessary AWS infrastructure in less than an hour, with an AWS infrastructure cost of less than $2.00 per hour once deployed. Read the Reference Deployment Guide or Launch the Quick Start. Again, you will be prompted for all necessary information. Here’s an excerpt from the CloudFormation template:


"e-configure-rdgw" : {
    "command" : {
        "Fn::Join" : [
            "",
            [
                "powershell.exe -ExecutionPolicy RemoteSigned",
                " C:\\cfn\\Configure-RDGW.ps1 -ServerFQDN ",
                {
                    "Ref" : "RDGWNetBIOSName1"
                },
                ".",
                {
                    "Ref" : "DomainDNSName"
                },
                " -DomainNetBiosName BUILTIN -GroupName administrators -UserName ",
                {
                    "Ref" : "AdminUser"
                }
            ]
        ]
    }
}

Available Now
All of these new AMIs and features are available now and you can start using them today!

Jeff;

New AWS Management Portal for vCenter

IT Managers and Administrators working within large organizations regularly tell us that they find the key AWS messages — fast and easy self-service provisioning, exchange of CAPEX for OPEX, and the potential for cost savings — to be attractive and compelling. They want to start moving into the future by experimenting with AWS, but they don’t always have the time to learn a new set of tools and concepts.

In order to make AWS more accessible to this very important audience, we are launching the new AWS Management Portal for vCenter today!

If you are already using VMware vCenter to manage your virtualized environment, you will be comfortable in this new environment right away, even if you are new to AWS, starting with the sign-on process, which integrates with your existing Active Directory.

The look-and-feel and the workflow that you use to create new AWS resources will be familiar and you will be launching EC2 instances before too long. You can even import your existing “golden” VMware images to EC2 through the portal (this feature makes use of VM Import).

I believe that IT Managers will find this blend of centralized control and cloud power to be a potent mix. vCenter Administrators can exercise full control over hybrid IT environments (both on-premises and EC2 instances) using a single UI. They have full control over cloud-based resources, and can dole out permissions to users on a per-environment basis, all coupled with single sign-on to existing Active Directory environments.

 

Visual Tour
Let’s take a tour of the AWS Management Portal for vCenter, starting with the main screen. As you can see, there’s an AWS Management Portal icon in the Inventory section:

The portal displays all of the public AWS Regions in tree form:

Administrative users have the power to control which environments are visible to each non-administrative user. For example, this user can see nothing more than the Dev/Test environment in the US West (Northern California) Region:

This user has access to the Prod environment in that Region, and to additional environments in other Regions:

Permissions are managed from within the Portal:

Each Region can be expanded in order to display the vSphere environments, templates, and the EC2 instances within the Region:

You can right-click on an environment to delete or modify it, create new templates, or add permissions:

You can create a template and then use it to launch any number of EC2 instances, all configured in the same way. You can create templates for your users and lock them down for governance and management purposes.

You start by naming the template and choosing an AMI (Amazon Machine Image):

Then you select the instance type and the allowable network subnets. EC2 has a wide variety of instance types; you can choose the number of vCPUs, the amount of RAM, the local disk storage, and so forth. There are also compute-optimized, memory-optimized, and storage-optimized instances. The network subnets are a feature of the Amazon Virtual Private Cloud and provide you with full control over your network topology.

Next, you can choose to provision Elastic Block Store (EBS) volumes as part of the template. The volumes can range in size from 1 GB to 1 TB, and will be created and attached to the instance each time the template is used:

You can also choose the security groups (firewall rules) that control traffic to and from the instances:

Finally, you choose the key pair that will be used for SSH access to the instance. You can also configure an instance without a key pair.

You can right-click on a template to copy it, deploy instances, or to delete it:

When you deploy an instance, you can use EC2’s tagging feature to attach one or more key/value pairs to the instance:

You can also choose the subnet for the instance:

The instance will be launched after you review your choices:

You can also manage your VPC subnets and security groups:


As I mentioned earlier, you can import an existing virtual machine into EC2 with a couple of clicks:

Getting Started
You can download the AWS Management Portal for vCenter today and install it into your existing vSphere Client.

Jeff;

New EBS Encryption for Additional Data Protection

We take data protection very seriously! Over the years we have added a number of security and encryption features to various parts of AWS. We protect data at rest with Server Side Encryption for Amazon S3 and Amazon Glacier, multiple tiers of encryption for Amazon Redshift, and Transparent Data Encryption for Oracle and SQL Server databases via Amazon RDS. We protect data in motion with extensive support for SSL/TLS in CloudFront, Amazon RDS, and Elastic Load Balancing.

Today we are giving you yet another option, with support for encryption of EBS data volumes and the associated snapshots. You can now encrypt data stored on an EBS volume at rest and in motion by setting a single option. When you create an encrypted EBS volume and attach it to a supported instance type, data on the volume, disk I/O, and snapshots created from the volume are all encrypted. The encryption occurs on the servers that host the EC2 instances, providing encryption of data as it moves between EC2 instances and EBS storage.

Enabling Encryption
You can enable EBS encryption when you create a new volume:

You can see the encryption state of each of your volumes from the console:
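You can do the same programmatically. Here is a hedged sketch (boto3 for illustration; the Region and Availability Zone are placeholders) that creates an encrypted General Purpose (SSD) volume:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A single flag is all it takes; data at rest, disk I/O, and snapshots
# of this volume will all be encrypted.
volume = ec2.create_volume(AvailabilityZone="us-east-1a",
                           Size=100,
                           VolumeType="gp2",
                           Encrypted=True)
print(volume["VolumeId"], volume["Encrypted"])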

Important Details
Adding encryption to a provisioned IOPS (PIOPS) volume will not affect the provisioned performance. Encryption has a minimal effect on I/O latency.

The snapshots that you take of an encrypted EBS volume are also encrypted and can be moved between AWS Regions as needed. You cannot share encrypted snapshots with other AWS accounts and you cannot make them public.

As I mentioned earlier, your data is encrypted before it leaves the EC2 instance. In order to be able to do this efficiently and with low latency, the EBS encryption feature is only available on EC2’s M3, C3, R3, CR1, G2, and I2 instances. You cannot attach an encrypted EBS volume to other instance types.

Also, you cannot enable encryption for an existing EBS volume. Instead, you must create a new, encrypted volume and copy the data from the old one to the new one using the file manipulation tool of your choice. Rsync (Linux) and Robocopy (Windows) are two good options, but there are many others.

Each newly created volume gets a unique 256-bit AES key; volumes created from encrypted snapshots share the key. You do not need to manage the encryption keys because they are protected by our own key management infrastructure, which implements strong logical and physical security controls to prevent unauthorized access. Your data and associated keys are encrypted using the industry-standard AES-256 algorithm.

Encrypt Now
EBS encryption is available now in all eight of the commercial AWS Regions and you can start using it today! There is no charge for encryption and it does not affect the published EBS Service Level Agreement (SLA) for availability.

Jeff;

EC2 Expansion – G2 and C3 Instances in Additional Regions

I’ll be brief! We are making two types of Amazon EC2 instances available in even more AWS Regions.

G2 Expansion
EC2’s G2 instances are designed for applications that require 3D graphics capabilities. Each instance includes an NVIDIA GRID™ GPU with 1,536 parallel processing cores, 4 GB of video RAM, and hardware-accelerated video encoding. The g2.2xlarge instances also use high-frequency Intel Xeon E5-2670 processors and include 15 GiB of RAM and 60 GB of SSD-based storage.

Today we are making the G2 instances available in the Asia Pacific (Sydney) and Asia Pacific (Singapore) Regions and you can start using them now.

C3 Expansion
EC2’s C3 instances are designed for CPU-bound, scale-out applications and compute-intensive HPC work. They are available in five sizes (c3.large, c3.xlarge, c3.2xlarge, c3.4xlarge, and c3.8xlarge), all with Intel Xeon E5-2680 v2 processors, 3.75 to 60 GiB of RAM, and 32 to 640 GB of SSD-based storage.

Today we are making all five sizes of C3 instances available in the South America (São Paulo) Region. C3 instances are currently available in a single Availability Zone in this Region, so we recommend that you not specify an Availability Zone preference when you launch them.
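Here is a sketch of such a launch (boto3 for illustration; the AMI ID is a placeholder). Omitting the Placement parameter lets EC2 choose a zone that has C3 capacity:

import boto3

ec2 = boto3.client("ec2", region_name="sa-east-1")

# No Availability Zone is specified, so EC2 picks one that can
# satisfy the C3 request.
ec2.run_instances(ImageId="ami-12345678",    # placeholder AMI ID
                  InstanceType="c3.large",
                  MinCount=1, MaxCount=1)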

Jeff;

Success on AWS: Broadcast Interactive Media + Wowza Streaming Engine

I would like to share an exciting AWS customer success story with you today, courtesy of AWS Partner Wowza Media Systems and their customer, Broadcast Interactive Media. This is the first of what I hope will develop into a series of guest posts authored by AWS partners and customers. If you are interested in contributing a post of your own, please contact me at awseditor@amazon.com.

About Wowza
Video is everywhere, and viewers expect a high-quality, TV-like video experience wherever they are and whatever device they use to access video. Wowza Streaming Engine is robust, customizable, and scalable server software that powers reliable streaming of high-quality video and audio to any device, anywhere. The software runs on Amazon EC2 so that organizations can take full advantage of the servers’ extensive streaming capabilities as well as the flexibility and scale of AWS.

Wowza Streaming Engine on Amazon EC2 is ideal for streaming live events, concerts, church services, webinars, and company meetings. It is also an excellent choice for adding overflow capacity to an organization’s dedicated Wowza deployments, or for cost-constrained start-ups that need the flexibility of starting small while retaining the capacity to grow their businesses cost-effectively over time.

About BIM
As the leading provider of revenue and technical solutions for local media, Broadcast Interactive Media (BIM) is a trusted resource for hundreds of broadcast and publisher clients such as ABC, CBS Television, Fox Television, Hearst-Argyle, NBC Owned Television Stations, Telemundo Station Group and many more.

BIM products include BIMvid – a video platform for broadcasters, events, and publishers. BIMvid is a custom, highly scalable live streaming application that provides easy-to-use tools for organizations to manage their live streams and on-demand video. BIMvid uses Wowza Streaming Engine on AWS to provide almost instantaneous scaling for optimal viewer experiences around the world and on any device. BIM leverages multiple load-balanced Wowza Streaming Engine servers in each AWS Region and utilizes a dynamic ingest solution for incoming streams that selects the closest Wowza Streaming Engine server based on the user’s location and server load.

Here is how the pieces fit together:

In addition to leveraging Wowza Streaming Engine on AWS, BIMvid’s video management system is also built on AWS, using the Symfony PHP framework and MongoDB as the database system. This management system provides customers with the ability to schedule and record live streams for VOD playback and automatic publishing.

Getting Started
You can launch the Wowza Media Server from the AWS Marketplace and be up and running in minutes.

If you have an upcoming event that you would like to stream, take a look at the free trial of BIMvid.

Jeff;

The New AWS TCO (Total Cost of Ownership) Calculator

Our customers often find multiple benefits from moving to the cloud, including agility, cost savings, and flexibility. Whether you run on AWS or own and operate your own infrastructure, there are many contributors to the overall cost. Weighing the financial considerations of owning and operating a data center (or renting a colocation facility) versus using a cloud infrastructure requires detailed and careful analysis.

Customers tell us that it can be challenging for them to perform accurate apples-to-apples cost comparisons between on-premises infrastructure and cloud. In reality, it is not as simple as just comparing on-premises hardware costs to the pay-as-you-go pricing model of compute, storage, and bandwidth in the cloud. The problem is further complicated by the fact that calculating the true, complete cost of owning and operating your own on-premises infrastructure is not trivial.

To make it easier for you to compare these costs, we are announcing the new AWS TCO Calculator. This tool should help customers who have a basic familiarity with infrastructure generate a fact-based, apples-to-apples TCO comparison between on-premises infrastructure and AWS.

The calculator is simple to use and provides a reasonable estimate of the costs of on-premises/colocation infrastructure and the equivalent AWS services, based on the information you provide. It also provides a comprehensive and detailed cost breakdown report (which you can download or store in Amazon S3 for sharing with others) and an FAQ that explains the assumptions and the methodology behind the calculations.

The tool automates the task of selecting the right AWS instance type based on the information you provide. As you can see below, you can describe your physical or virtual infrastructure in detail and the tool will provide the equivalent AWS instance types that meet your requirements. This will eliminate much of the guesswork associated with choosing the right AWS instance types.

For customers interested in drilling down into individual cost items, the tool provides a detailed line-by-line comparison, backed by data points from third-party analyst and industry research and by analysis of data from hundreds of AWS customers.


As always, we are looking for feedback, suggestions and comments and hope to roll out more use cases and improve it over time. Try out the new AWS TCO Calculator today and let us know what you think. You can contact the team behind the tool via email at tcosupport@amazon.com.

Jeff;

Amazon WorkSpaces Now Available in Europe

Amazon WorkSpaces provides a desktop computing environment in the cloud. It gives enterprise IT the power to meet the needs of a diverse user base by providing them with the ability to work wherever and whenever they want, while using the desktop or mobile device of their choice.

Today we are bringing Amazon WorkSpaces to Europe, with immediate availability in the EU (Ireland) Region. This new Region joins the existing US East (Northern Virginia) and US West (Oregon) Regions.

The Amazon WorkSpaces Administration Guide contains the information that you need to have to get started as quickly and efficiently as possible. Within this guide you’ll find a Getting Started Guide, documentation for WorkSpaces administrators, and WorkSpaces Client Help.

Jeff;