Category: Amazon EC2


Lots of SAP News to Start the Week

Last week one of my colleagues stopped me in the hall and said “Jeff, you have to tell our customers about all of the great work that we are doing with SAP.” I asked him for some details and he was happy to oblige.

SAP Business All-in-One is an important piece of enterprise software, a package that is mission critical for many companies. It includes Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Supplier Relationship Management (SRM), and Business Intelligence (BI) functions.

The first big piece of news is that we are expanding the range of SAP certified solutions on AWS. This includes:

  • SAP Business All-in-One on Linux.
  • SAP Business All-in-One on Microsoft Windows.
  • Expanded certification for SAP Rapid Deployment solutions on Windows Server 2008 R2.
  • Expanded certification for SAP Business Objects on Windows Server 2008 R2.

Second, AWS partner VMS (a German management consultancy staffed by a number of ex-SAP executives) has published a new SAP TCO analysis. Their research shows that running SAP on AWS can result in infrastructure cost savings of up to 69% when compared to on-premises or colocation-based hosting. You can read the executive summary to learn more.

VMS computed a CWI (Cloud Worthiness Index) value of 59 for SAP running on AWS. The CWI was designed to quantify the economic value of the cloud, and is based on VMS’s measurements of over 2,600 SAP systems. It accounts for TCO and best practices, both with and without the cloud. You can read more about the CWI here.

 

Third, we have announced a number of other SAP offerings on AWS:

  • You can now process Big Data with SAP on AWS using SAP's new HANA in-memory database. SAP has published a comprehensive getting started guide and they are also offering a 30-day free trial for testing and evaluation.
  • SAP Afaria makes it easy to build and deploy mobile applications that connect mobile workers to business data. Afaria handles a number of important aspects of this, including password and certificate management, an application portal, and a management console. You can launch Afaria from the AWS Marketplace (register for the 14-day free trial if you don’t have a license).

You can find case studies and technical papers on our SAP microsite.

— Jeff;

VM Export Service For Amazon EC2

The AWS VM Import service gives you the ability to import virtual machines in a variety of formats into Amazon EC2, allowing you to easily migrate from your on-premises virtualization infrastructure to the AWS Cloud. Today we are adding the next element to this service. You now have the ability to export previously imported EC2 instances back to your on-premises environment.

You can initiate and manage the export with the latest version of the EC2 command line (API) tools. Download and install the tools, and then export the instance of your choice like this:

ec2-create-instance-export-task -e vmware -b NAME-OF-S3-BUCKET INSTANCE-ID

Note that you need to specify the Instance ID and the name of an S3 bucket to store the exported virtual machine image.

You can monitor the export process using ec2-describe-export-tasks and you can cancel unfinished tasks using ec2-cancel-export-task.
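The monitoring step is easy to script. Here is a minimal Python sketch of a loop that polls an export task until it reaches a terminal state; the status lookup is injected as a function so the logic is clear in isolation (in real use it would shell out to ec2-describe-export-tasks and parse the output):

```python
# Hypothetical sketch: wait for an export task to finish.
# The status lookup is a plugged-in function; in real use it would run
# ec2-describe-export-tasks and sleep between polls.

def wait_for_export(task_id, get_status, pause=lambda: None):
    """Poll the export task until it reaches a terminal state."""
    terminal = {"completed", "cancelled"}
    while True:
        status = get_status(task_id)
        if status in terminal:
            return status
        pause()  # in real code: time.sleep(30) between polls

# Simulated status sequence for illustration
statuses = iter(["active", "active", "completed"])
final = wait_for_export("export-i-1234abcd", lambda _: next(statuses))
```

If you cancel an unfinished task with ec2-cancel-export-task, the same loop would simply observe the cancelled state and stop.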

Once the export task has completed you need only download the exported image to your local environment.

The service can export Windows Server 2003 (R2) and Windows Server 2008 EC2 instances to VMware ESX-compatible VMDK, Microsoft Hyper-V VHD, or Citrix Xen VHD images. We plan to support additional operating systems, image formats, and virtualization platforms in the future.

Let us know what you think, and what other features, platforms and operating systems you would like us to support.

— Jeff;

Elastic Load Balancer – Console Updates and IPv6 Support for 2 Additional Regions

You can now manage the listeners, SSL certificates, and SSL ciphers for an existing Elastic Load Balancer from within the AWS Management Console. This enhancement makes it even easier to get started with Elastic Load Balancing and simpler to maintain a highly available application using Elastic Load Balancing. While this functionality has been available via the API and command line tools, many customers told us that it was critical to be able to use the AWS Console to manage these settings on an existing load balancer.

With this update, you can add a new listener with a front-end protocol/port and back-end protocol/port:

If the listener uses encryption (HTTPS or SSL listeners), then you can create or select the SSL certificate:

In addition to selecting or creating the certificate, you can now update the SSL protocols and ciphers presented to clients:

We have also expanded IPv6 support for Elastic Load Balancing to include the US West (Northern California) and US West (Oregon) regions.

— Jeff;

 

Amazon CloudFront – Support for Dynamic Content

Introduction
Amazon CloudFront’s network of edge locations (currently 30, with more in the works) gives you the ability to distribute static and streaming content to your users at high speed with low latency.

Today we are introducing a set of features that, taken together, allow you to use CloudFront to serve dynamic, personalized content more quickly.

What is Dynamic Personalized Content?
As you know, content on the web is identified by a URL, or Uniform Resource Locator, such as https://media.amazonwebservices.com/blog/console_cw_est_charge_service_2.png. A URL like this always identifies a unique piece of content.

A URL can also contain a query string. This takes the form of a question mark (“?”) followed by additional information that the server can use to personalize the request. Suppose that we had a server at www.example.com, and that it can return information about a particular user by invoking a PHP script that accepts a user name as an argument, with URLs like http://www.example.com/userinfo.php?jeff or http://www.example.com/userinfo.php?tina.

Up until now, CloudFront did not use the query string as part of the key that it uses to identify the data that it stores in its edge locations.

We’re changing that today, and you can now use CloudFront to speed access to your dynamic data at our current low rates, making your applications faster and more responsive, regardless of where your users are located.

With this change (and the others that I’ll tell you about in a minute), Amazon CloudFront will become an even better component of your global applications. We’ve put together a long list of optimizations that will each increase the performance of your application on their own, but will work even better when you use them in conjunction with other AWS services such as Route 53, Amazon S3, and Amazon EC2.

Tell Me More
Ok, so here’s what we’ve done:

Persistent TCP Connections – Establishing a TCP connection takes some time because each new connection requires a three-way handshake between the server and the client. Amazon CloudFront makes use of persistent connections to each origin for dynamic content. This obviates the connection setup time that would otherwise slow down each request. Reusing these “long-haul” connections back to the server can eliminate hundreds of milliseconds of connection setup time. The connection from the client to the CloudFront edge location is also kept open whenever possible.
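A rough back-of-the-envelope calculation shows why this matters. The numbers below (a 100 ms round-trip time and 50 requests) are illustrative assumptions, not CloudFront measurements:

```python
# Each new TCP connection spends one round trip on the three-way handshake
# before any request data can flow; a persistent connection pays that cost
# only once.

def handshake_overhead_ms(requests, rtt_ms, persistent):
    """Total time spent on connection setup across all requests."""
    connections = 1 if persistent else requests
    return connections * rtt_ms

# 50 requests to an origin 100 ms away
without_reuse = handshake_overhead_ms(50, 100, persistent=False)  # 5000 ms
with_reuse = handshake_overhead_ms(50, 100, persistent=True)      # 100 ms
```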

Support for Multiple Origins – You can now reference multiple origins (sources of content) from a single CloudFront distribution. This means that you could, for example, serve images from Amazon S3, dynamic content from EC2, and other content from third-party sites, all from a single domain name. Being able to serve your entire site from a single domain will simplify implementation, allow the use of more relative URLs within the application, and can even get you past some cross-site scripting limitations.

Support for Query Strings – CloudFront now uses the query string as part of its cache key. This optional feature gives you the ability to cache content at the edge that is specific to a particular user, city (e.g. weather or traffic), and so forth. You can enable query string support for your entire website or for selected portions, as needed.
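Conceptually, this changes the key under which CloudFront stores an object at the edge. The sketch below illustrates the idea only; CloudFront's actual cache key derivation is internal to the service:

```python
# Illustration: with query strings included in the key, ?jeff and ?tina
# become distinct cache entries; with them excluded, both URLs share one.

from urllib.parse import urlsplit

def cache_key(url, include_query_string):
    parts = urlsplit(url)
    key = parts.netloc + parts.path
    if include_query_string and parts.query:
        key += "?" + parts.query
    return key

jeff = cache_key("http://www.example.com/userinfo.php?jeff", True)
tina = cache_key("http://www.example.com/userinfo.php?tina", True)
# jeff != tina here; with include_query_string=False they would be equal
```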

Variable Time-To-Live (TTL) – In many cases, dynamic content is either not cacheable or cacheable for a very short period of time, perhaps just a few seconds. In the past, CloudFront’s minimum TTL was 60 minutes since all content was considered static. The new minimum TTL value is 0 seconds. If you set the TTL for a particular origin to 0, CloudFront will still cache the content from that origin. It will then make a GET request with an If-Modified-Since header, thereby giving the origin a chance to signal that CloudFront can continue to use the cached content if it hasn’t changed at the origin.
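The TTL-0 revalidation flow can be sketched as follows. The helper names are hypothetical; the real exchange is an HTTP conditional GET using the If-Modified-Since header:

```python
# Sketch: with a TTL of 0 the edge keeps the object but checks with the
# origin before reusing it.

def serve_from_edge(cached, origin_check):
    """origin_check(last_modified) returns (status, body), like an origin
    answering a conditional GET."""
    status, body = origin_check(cached["last_modified"])
    if status == 304:            # Not Modified: reuse the cached copy
        return cached["body"]
    return body                  # 200: origin sent fresh content

cached = {"body": "v1", "last_modified": "Mon, 14 May 2012 00:00:00 GMT"}
unchanged = serve_from_edge(cached, lambda lm: (304, None))  # returns "v1"
updated = serve_from_edge(cached, lambda lm: (200, "v2"))    # returns "v2"
```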

Large TCP Window – We increased the initial size of CloudFront’s TCP window to 10 back in February, but we didn’t say anything at the time. This enhancement allows more data to be “in flight” across the wire at a given time, without the usual waiting time as the window grows from the older value of 2.
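To see why the initial window matters, consider a simple textbook model of TCP slow start, in which the congestion window doubles each round trip (real TCP behavior is more nuanced; this is just an approximation):

```python
# Rough model: count round trips needed to deliver a response of a given
# size, with the window doubling each round trip.

def round_trips(segments, initial_window):
    trips, window, sent = 0, initial_window, 0
    while sent < segments:
        sent += window
        window *= 2
        trips += 1
    return trips

# A 60-segment response (~87 KB at a 1,460-byte MSS):
old_window = round_trips(60, 2)    # 5 round trips with an initial window of 2
new_window = round_trips(60, 10)   # 3 round trips with an initial window of 10
```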

API and Management Console Support – All of the features listed above are accessible from the CloudFront APIs and the CloudFront tab of the AWS Management Console. You can now use URL patterns to exercise fine-grained control over the caching and delivery rules for different parts of your site.

Of course, all of CloudFront’s existing static content delivery features will continue to work as expected: GET and HEAD requests, default root object, invalidation, private content, access logs, IAM integration, and delivery of objects compressed by the origin.

Working Together
Let’s take a look at the ways that various AWS services work together to make delivery of static and dynamic content as fast, reliable, and efficient as possible (click on the diagram at right for an even better illustration):

  • From Application / Client to CloudFront – CloudFront’s request routing technology ensures that each client is connected to the nearest edge location, as determined by latency measurements that CloudFront continuously takes from internet users around the world. Route 53 can optionally be used as a DNS service to create a CNAME from your custom domain name to your CloudFront distribution. Persistent connections expedite data transfer.
  • Within the CloudFront Edge Locations – Multiple levels of caching at each edge location speed access to the most frequently viewed content and reduce the need to go to your origin servers for cacheable content.
  • From Edge Location to Origin – The nature of dynamic content requires repeated back and forth calls to the origin server. CloudFront edge locations collapse multiple concurrent requests for the same object into a single request. They also maintain persistent connections to the origins (with the large window size). Connections to other parts of AWS are made over high-quality networks that are monitored by Amazon for both availability and performance. This monitoring has the beneficial side effect of keeping error rates low and window sizes high.
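Request collapsing can be illustrated with a small sketch. This is a deliberately simplified, single-threaded stand-in for what the edge actually does, but it shows why N concurrent requests for the same object produce only one origin fetch:

```python
# Simplified stand-in: concurrent requests for the same key share one
# origin fetch instead of each hitting the origin.

class CollapsingCache:
    def __init__(self, fetch_from_origin):
        self.fetch = fetch_from_origin
        self.origin_calls = 0
        self.in_flight = {}

    def get(self, key):
        if key not in self.in_flight:
            self.origin_calls += 1          # only the first request fetches
            self.in_flight[key] = self.fetch(key)
        return self.in_flight[key]          # later requests share the result

cache = CollapsingCache(lambda key: "body-of-" + key)
for _ in range(100):                        # 100 "concurrent" requests
    cache.get("/index.html")
# cache.origin_calls is 1, not 100
```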
 

Cache Behaviors
In order to give you full control over query string support, TTL values, and origins, you can now associate a set of Cache Behaviors with each of your CloudFront distributions. Each behavior includes the following elements:

  • Path Pattern – A pattern (e.g. “*.jpg”) that identifies the content subject to this behavior.
  • Origin Identifier – The identifier for the origin where CloudFront should forward user requests that match this path pattern.
  • Query String – A flag to enable support for query string processing for URLs that match the path pattern.
  • Trusted Signers – Information to enable other AWS accounts to create signed URLs for this URL path pattern.
  • Protocol Policy – Either allow-all or https-only, also applied only to this path pattern.
  • MinTTL – The minimum time-to-live for content subject to this behavior.
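Putting these elements together, behavior matching might look like the following sketch. The first-match-wins ordering and the sample behaviors are assumptions made for illustration, using shell-style patterns like “*.jpg”:

```python
# Illustrative behavior table: pattern, origin, and minimum TTL per entry.

from fnmatch import fnmatch

behaviors = [
    {"pattern": "*.jpg",  "origin": "s3-images", "min_ttl": 86400},
    {"pattern": "/api/*", "origin": "ec2-app",   "min_ttl": 0},
    {"pattern": "*",      "origin": "ec2-app",   "min_ttl": 3600},  # default
]

def match_behavior(path):
    """Return the first behavior whose path pattern matches the request."""
    for behavior in behaviors:
        if fnmatch(path, behavior["pattern"]):
            return behavior

image = match_behavior("/photos/cat.jpg")   # served from s3-images
api = match_behavior("/api/userinfo")       # ec2-app, revalidated (TTL 0)
```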

Tool Support
Andy from CloudBerry Lab sent me a note to let me know that they have added dynamic content support to the newest free version of the CloudBerry Explorer for Amazon S3.  In Andy’s words:

I’d like to let you know that CloudBerry Explorer is ready to support new CloudFront features by the time of release.  We have added the ability to manage multiple origins for a distribution, configure cache behavior for each origin based on URL path patterns and configure CloudFront to include query string parameters.

You can read more about this in their new blog post, How to configure CloudFront Dynamic Content with CloudBerry S3 Explorer.

Andy also sent some screen shots to show us how it works. The first step is to specify the Origins and CNAMEs associated with the distribution:

The next step is to specify the Path Patterns:

With the Origins and Path Patterns established, the final step is to configure the Path Patterns:

Update: Tej from Bucket Explorer wrote in to tell me that they are now supporting this feature:

Hi, I am one of the developers of Bucket Explorer. I am excited to announce that the new version of Bucket Explorer supports the CloudFront Dynamic Content feature. Try its full-featured 30-day trial version. Dynamic Content (Steps and Images).

And Here You Go
Together with CloudFront’s cost-effectiveness (no minimum commits or long-term contracts), these features add up to a content distribution system that is fast, powerful, and easy to use.

So, what do you think? What kinds of applications can you build with these powerful new features?

— Jeff;

PS – Read more about this new feature in Werner’s new post: Dynamic Content Support in Amazon CloudFront.

Monitor Estimated Charges Using Billing Alerts

Introduction
Because the AWS Cloud operates on a pay-as-you-go model, your monthly bill will reflect your actual usage. In situations where your overall consumption can vary from hour to hour, it is always a good idea to log in to the AWS portal and check your account activity on a regular basis. We want to make this process easier and simpler because we know that you have more important things to do.

To this end, you can now monitor your estimated AWS charges with our new billing alerts, which use Amazon CloudWatch metrics and alarms.

What’s Up?
We regularly estimate the total monthly charge for each AWS service that you use. When you enable monitoring for your account, we begin storing the estimates as CloudWatch metrics, where they’ll remain available for the usual 14-day period. The following variants on the billing metrics are stored in CloudWatch:

  • Estimated Charges: Total
  • Estimated Charges: By Service
  • Estimated Charges: By Linked Account (if you are using Consolidated Billing)
  • Estimated Charges: By Linked Account and Service (if you are using Consolidated Billing)

You can use this data to receive billing alerts (which are simply Amazon SNS notifications triggered by CloudWatch alarms) at the email address of your choice. Since the notifications use SNS, you can also route them to your own applications for further processing.

It is important to note that these are estimates, not predictions. The estimate approximates the cost of your AWS usage to date within the current billing cycle and will increase as you continue to consume resources. It includes usage charges for things like Amazon EC2 instance-hours and recurring fees for things like AWS Premium Support. It does not take trends or potential changes in your AWS usage pattern into account.

So, what can you do with this? You can start by using the billing alerts to let you know when your AWS bill will be higher than expected. For example, you can set up an alert to make sure that your AWS usage remains within the Free Usage Tier or to find out when you are approaching a budget limit. This is a very obvious and straightforward use case, and I’m sure it will be the most common way to use this feature at first. However, I’m confident that our community will come up with some more creative and more dynamic applications.
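At its core, such an alert is a threshold check on the estimated-charges metrics. Here is a sketch of the comparison that CloudWatch performs on your behalf; the metric names mirror the variants listed above, while the dollar figures and thresholds are invented:

```python
# Sketch of the alarm check: which billing metrics have crossed their
# thresholds so far this billing cycle?

def check_alarms(estimates, alarms):
    """estimates: {metric: dollars so far}; alarms: {metric: threshold}.
    Returns the metrics whose estimate has crossed the threshold."""
    return sorted(metric for metric, threshold in alarms.items()
                  if estimates.get(metric, 0.0) >= threshold)

estimates = {"Total": 112.40, "AmazonEC2": 80.10, "AmazonS3": 9.75}
alarms = {"Total": 100.00, "AmazonEC2": 75.00, "AmazonS3": 25.00}
fired = check_alarms(estimates, alarms)   # ["AmazonEC2", "Total"]
```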

Here are some ideas to get you started:

  • Relate the billing metrics to business metrics such as customer count, customer acquisition cost, or advertising spending (all of which you could also store in CloudWatch, as custom metrics) and use them to track the relationship between customer activity and resource consumption. You could (and probably should) know exactly how much you are spending on cloud resources per customer per month.
  • Update your alerts dynamically when you change configurations to add or remove cloud resources. You can use the alerts to make sure that a regression or a new feature hasn’t adversely affected your operational costs.
  • Establish and monitor ratios between service costs. You can establish a baseline set of costs, and set alarms on the total charges and on the individual services. Perhaps you know that your processing (EC2) cost is generally 1.5x your database (RDS) cost, which in turn is roughly equal to your storage (S3) cost. Once you have established the baselines, you can easily detect changes that could indicate a change in the way that your system is being used (perhaps your newer users are storing, on average, more data than the original ones).
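The ratio-monitoring idea reduces to a small calculation. In this sketch the baseline ratio and tolerance are invented; the point is the shape of the check, not the numbers:

```python
# Flag a cost ratio that has strayed from its baseline.

def ratio_drifted(ec2_cost, rds_cost, baseline_ratio, tolerance):
    """True when the EC2/RDS cost ratio strays more than `tolerance`
    (as a fraction of the baseline) from the expected ratio."""
    ratio = ec2_cost / rds_cost
    return abs(ratio - baseline_ratio) > tolerance * baseline_ratio

steady = ratio_drifted(150.0, 100.0, 1.5, 0.2)   # 1.5x: within tolerance
drifted = ratio_drifted(240.0, 100.0, 1.5, 0.2)  # 2.4x: worth investigating
```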

Enabling and Setting a Billing Alert
To get started, visit your AWS Account Activity page and enable monitoring of your AWS charges. Once you’ve done that, you can set your first billing alert on your total AWS charges. Minutes later (as soon as the data starts to flow in to CloudWatch) you’ll be able to set alerts for charges related to any of the AWS products that you use.

We’ve streamlined the process to make setting up billing alerts as easy and quick as possible. You don’t need to be familiar with CloudWatch alarms; just fill out this simple form, which you can access from the Account Activity Page:



You’ll receive a subscription notification email from Amazon SNS; be sure to confirm it by clicking the included link to make sure you receive your alerts. You can then access your alarms from the Account Activity page or the CloudWatch dashboard in the AWS Management Console.

Going Further
If you have already used CloudWatch, you are probably already thinking about some even more advanced ways to use this new information. Here are a few ideas to get you started:

  • Publish the alerts to an SNS topic, and use them to recalculate your business metrics, possibly altering your Auto Scaling parameters as a result. You’d probably use the CloudWatch APIs to retrieve the billing estimates and to set new alarms.
  • Use two separate AWS accounts to run two separate versions of your application, with dynamic A/B testing based on cost and ROI.

I’m sure that your ideas are even better than mine. Feel free to post them, or (better yet), implement them!

— Jeff;

Amazon RDS for SQL Server and .NET support for AWS Elastic Beanstalk

We are continuing to simplify the Windows development experience on AWS, and today we are excited to announce Amazon RDS for SQL Server and .NET support for AWS Elastic Beanstalk. Amazon RDS takes care of the tedious aspects of deploying, scaling, patching, and backing up of a relational database, freeing you from time-consuming database administration tasks. AWS Elastic Beanstalk is an easy way to deploy and manage applications in the AWS cloud and handles the deployment details of capacity provisioning, load balancing, auto scaling, and application health monitoring.

Here are the details…

Amazon RDS for SQL Server
We launched the Amazon Relational Database Service (RDS) in late 2009 with support for MySQL. Since then, we have added a number of features, including Multi-AZ, read replicas, and VPC support. Last year, we added support for Oracle Database.

Today we are extending the manageability benefits of Amazon RDS to SQL Server customers. Amazon RDS now supports Express, Web, Standard, and Enterprise Editions of SQL Server 2008 R2. We plan to add support for SQL Server 2012 later this year.

If you are a new Amazon RDS customer, you can use Amazon RDS for SQL Server (Express Edition) under the free usage tier for a full year. After that, you can use the service under multiple licensing models, with prices starting as low as $0.035/hour. Refer to Amazon RDS for SQL Server pricing for more details.

.NET Support for AWS Elastic Beanstalk
Earlier this year, we added support for PHP applications to Elastic Beanstalk alongside the existing support for Java applications.

Today, we are extending Elastic Beanstalk to our Windows developers who are building .NET applications. Elastic Beanstalk leverages the Windows Server 2008 R2 AMI and IIS 7.5 to run .NET applications. You can run existing applications on AWS with minimal changes. There is no additional charge for Elastic Beanstalk; you pay only for the AWS resources needed to store and run your applications. And if you are eligible for the AWS free usage tier, you can deploy and run your application on Elastic Beanstalk for free.

AWS Toolkit for Visual Studio Enhancements
We are also updating the AWS Toolkit for Visual Studio so you can deploy your existing web application projects to AWS Elastic Beanstalk. You can also use the AWS Toolkit for Visual Studio to create Amazon RDS DB Instances and connect to them directly, so you can focus on building your applications without leaving your development environment.

Let’s look at how it all works. For a detailed step-by-step walkthrough, visit the AWS Elastic Beanstalk Developer Guide.

Deploy Your Application to AWS Elastic Beanstalk
To get started, simply install the AWS Toolkit for Visual Studio and make sure you have signed up for an AWS account. You can deploy any Visual Studio Web project to AWS Elastic Beanstalk, including ASP.NET MVC projects and ASP.NET Web Forms. As an example, I will use the NerdDinner MVC sample application.



To deploy to AWS Elastic Beanstalk, right-click the project, and then click Publish to AWS. Provide the details and complete the wizard. This will launch a new Elastic Beanstalk environment and create the AWS resources to run your application. That’s it; NerdDinner is now running on Elastic Beanstalk.

Create and Connect to an Amazon RDS Database Instance
By default, NerdDinner connects to a local SQL Server Express database, so we’ll need to make a few changes to connect it to an Amazon RDS for SQL Server instance. Let’s start by creating a new Amazon RDS for SQL Server instance using the AWS Explorer view inside Visual Studio.



We will also need to create the schema that NerdDinner expects. To do so, simply use the Publish to Provider wizard in Visual Studio to export the schema and data to a SQL script. You can then run the SQL script against the RDS for SQL Server database to recreate the schema and data.



Update Your Running Application
Now that the Amazon RDS for SQL Server database is set up, let’s modify the application’s connection string to use it. To do so, you simply modify the ConnectionString.config file in your NerdDinner project and provide the connection details of your RDS instance.



Finally, you will republish these changes to AWS Elastic Beanstalk. Using incremental deployments, the AWS Toolkit for Visual Studio will only upload the modified file and RDS-backed NerdDinner becomes available a few seconds later.



I hope that you enjoy these new AWS features!

— Jeff (with lots of help from Saad Ladki of the Elastic Beanstalk team);

Updated Microsoft SQL Server Offerings

As you might know, you can use official Windows AMIs (Amazon Machine Images) to launch Amazon EC2 instances with Microsoft SQL Server, inside or outside a VPC (Virtual Private Cloud).

Many customers are taking advantage of this possibility to run different types of workloads on the AWS Cloud. After listening to customer feedback (as we always like to do) and feature requests, today we’re happy to announce some updates to our Microsoft SQL Server offerings. Here they are.

  1. Support for Additional Instance Types
    You can now launch Microsoft SQL Server on the m1.small (1 ECU, 1.7 GB RAM) and m1.medium (2 ECU, 3.75 GB RAM) instance types. Since we offer several instance types, you may want to review the instance type details to choose the best fit for your workload.
  2. Support for Microsoft SQL Server Web Edition
    For customers who run web-facing workloads with Microsoft SQL Server software, we are introducing support for Microsoft SQL Server Web Edition, which brings together affordability, scalability, and manageability in a single offering. SQL Server Web will be supported across all Amazon EC2 instance types, all AWS regions, and On-Demand and Reserved Instance offerings.
  3. Support for Microsoft SQL Server 2012
    Last, but definitely not least, we now support Microsoft SQL Server 2012 on Amazon EC2.
    Customers now have immediate access to Amazon published (official) AMIs for:

    SQL Server 2012 Express (AMI catalog entry)
    SQL Server 2012 Web Edition (AMI catalog entry)
    SQL Server 2012 Standard Edition (AMI catalog entry)

    You can use Microsoft’s SQL Server Comparison Chart to learn more about the features available to you in each edition.

You can locate the new SQL Server 2012 AMIs by searching for the string “SQL_2012” (don’t forget the underscore) in the AMI list within the AWS Management Console:
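You can do the same search programmatically by listing images and filtering on the name. The AMI names below are invented for illustration; with the real API you would retrieve the image list and filter client-side in the same way:

```python
# Filter an AMI name list for the SQL Server 2012 offerings.

def find_sql_2012_amis(ami_names):
    return [name for name in ami_names if "SQL_2012" in name]

catalog = [
    "Windows_Server-2008-R2_SP1-English-64Bit-SQL_2012_Express",
    "Windows_Server-2008-R2_SP1-English-64Bit-SQL_2008_R2_Standard",
    "Windows_Server-2008-R2_SP1-English-64Bit-SQL_2012_Web",
]
matches = find_sql_2012_amis(catalog)   # the two SQL_2012 entries
```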

Let us know how you plan to take advantage of these new features!


Simone (@simon)

The AWS Marketplace – Find, Buy, Compare, and Launch Cloud Software

Using the new AWS Marketplace, you can easily find, compare, and start using an array of software systems and products. We’ve streamlined the discovery, deployment, and billing steps to make the entire process of finding and buying software quick, painless, and worthwhile for application consumers and producers.

Here’s what it looks like:

We are launching the AWS Marketplace with a wide selection of development and IT software, grouped into three categories:

  • Software Infrastructure – Application Development, Application Stacks, Application Servers, Databases & Caching, Network Infrastructure, Operating Systems, and Security.
  • Developer Tools – Issue & Bug Tracking, Monitoring, Source Control, and Testing.
  • Business Software – Business Intelligence, Collaboration, Content Management, CRM, eCommerce, High Performance Computing, Media, Project Management, and Storage & Backup.

The AWS Marketplace includes pay-as-you-go products that are available in Amazon Machine Image (AMI) form and hosted software with a variety of pricing models. When you launch an AMI, the product will run on your own private EC2 instance and the usage charges (monthly and/or hourly) will be itemized on your AWS Account Activity report. Hosted software is run by the seller and accessed through a web browser.

The Details
Each product in the marketplace is described by a detail page. The page contains the information you’ll need to make an informed decision including an overview, a rating, versioning data, details on the support model for the product, a link to the EULA (End User License Agreement), and pricing for each AWS Region.

For this example, I will focus on the Zend Server. I can find it by browsing or by searching:

I can then choose from among a list of matching products:

I can read all about the product, and I can check on the pricing. I’ll pay for the software and for the AWS resources separately:

The software pricing can vary by EC2 instance type:

1-Click Launch
When I am ready to go I click the Continue button. I then have two launch options: 1-click and EC2 Console:

The 1-click launch process starts with sensible default values (as recommended by the software provider) that I can customize as desired by expanding the section of interest:

As you can see from the screen shot above, the Marketplace can use an existing EC2 security group or it can create a new one that’s custom tailored to the application’s requirements. Once everything is as I like it, I need only click on the Accept Terms and Launch button:

I can visit the Your Software section of the AWS Marketplace to see all of my subscriptions and all of the EC2 instances that they are running on:


The Access Software link routes directly to the admin page for the Zend Server. After accepting the license agreement and entering a password, I can proceed to the Zend Server console:



EC2 Console Launch
I can also choose to launch the Zend Server AMI through the EC2 console. You can do this if you want to launch multiple instances at the same time, exercise additional control over the security groups, launch the software within a VPC or on Spot Instances, or perform other types of customization:

The AWS Marketplace distributes and then tracks AMIs for each product across Regions. These AMIs are versioned and the versions are tracked; you have the ability to select the version of your choice when launching a product.

Selling on the AWS Marketplace
If you are an ISV (Independent Software Vendor) and you want to list your products in the AWS Marketplace, start here! Check out our listing guidelines and best practices guides, and then get in touch with us via the email address on that page. Products that fit within one of the existing categories will be given the highest priority. As I noted earlier, we’ll add additional categories over time.

— Jeff;

Microsoft SharePoint Server on AWS Reference Architecture White Paper

We have just published the Microsoft SharePoint Server on AWS Reference Architecture White Paper.

This white paper discusses general concepts regarding how to use SharePoint services on AWS and provides detailed technical guidance on how to configure, deploy, and run a SharePoint Server farm on AWS. It illustrates reference architectures for common SharePoint Server deployment scenarios and discusses their network, security, and deployment configurations so you can run SharePoint Server workloads in the cloud with confidence. This white paper is targeted at IT infrastructure decision-makers and administrators. After reading it, you should have a good idea of how to set up and deploy the components of a typical SharePoint Server farm on AWS.

Here’s what you will find inside:

  • SharePoint Server Farm Reference Architecture
  • Common SharePoint Server Deployment Scenarios
    • Intranet SharePoint Server Farm
    • Internet Website or Service Based on SharePoint Server
  • Implementing SharePoint Server Architecture Scenarios in AWS
    • Network Setup: 
      • Amazon VPC setups for Intranet and Public Website Scenarios
      • AD DS Setup and DNS Configuration
    • Server Setup and Configuration
      • Mapping SharePoint Server Roles and Servers to Amazon EC2 AMIs and Instance Types
      • SharePoint and SQL Server Configurations
    • Security
      • Security Groups
      • Network ACLs
      • Windows Instance Security
      • Administrator Access
      • Data Privacy
    • Deployment
    • Monitoring and Management
    • Backup and Recovery

Enjoy!

— Jeff;