Amazon Web Services Blog

  • AWS Public IP Address Ranges Now Available in JSON Form

    21 Nov 2014 | permalink

    Many of our customers have asked us for a detailed list of the IP address ranges assigned to and used by AWS. While the use cases vary from customer to customer, they generally involve firewalls and other forms of network access controls. In the past we have met this need by posting human-readable information to the EC2, S3, SNS, and CloudFront Forums.

    IP Ranges in JSON Form
    I am happy to announce that this information is now available in JSON form at https://ip-ranges.amazonaws.com/ip-ranges.json. The information in this file is generated from our internal system-of-record and is authoritative. You can expect it to change several times per week and should poll accordingly.

    Here are the first couple of lines:

    {
      "syncToken": "1416523628",
      "createDate": "2014-11-20-22-51-01",
      "prefixes": [
        {
          "ip_prefix": "50.19.0.0/16",
          "region": "us-east-1",
          "service": "AMAZON"
        },
        {
          "ip_prefix": "54.239.98.0/24",
          "region": "us-east-1",
          "service": "AMAZON"
        },
    

    Valid values for the service key include "AMAZON", "EC2", "ROUTE53", "ROUTE53_HEALTHCHECKS", and "CLOUDFRONT". If you need to know all of the ranges and don't care about the service, use the "AMAZON" entries; the other entries are subsets of it. Also, some of the services, such as S3, are represented in "AMAZON" and do not have an entry that is specific to the service. We plan to add additional values over time; code accordingly!
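
    If you'd like to experiment with the file, here's a minimal Node.js sketch that downloads it and prints the prefixes for a single service (the choice of Node.js and of the "EC2" filter are mine, purely for illustration):

    // fetch-ranges.js - download ip-ranges.json and list the prefixes for one service
    var https = require('https');

    var SERVICE = 'EC2';  // or 'AMAZON', 'CLOUDFRONT', 'ROUTE53', ...

    https.get('https://ip-ranges.amazonaws.com/ip-ranges.json', function (res) {
      var body = '';
      res.on('data', function (chunk) { body += chunk; });
      res.on('end', function () {
        var ranges = JSON.parse(body);
        console.log('syncToken:', ranges.syncToken);
        ranges.prefixes.forEach(function (prefix) {
          if (prefix.service === SERVICE) {
            console.log(prefix.ip_prefix + '  (' + prefix.region + ')');
          }
        });
      });
    }).on('error', function (err) {
      console.error('Download failed:', err.message);
    });

    Since the file changes several times per week, a production version of this would compare the syncToken to the last value that it processed before touching any firewall rules or ACLs.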

    For more information, read the documentation on AWS IP Address Ranges.

    -- Jeff;

    PS - By my count, there are now 10,130,200 IP addresses in the EC2 range. My code excludes the first (all zeroes) and last (all ones) address in each CIDR block.
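
    If you'd like to reproduce a tally like this yourself, the arithmetic is straightforward. Here's a small helper (a sketch that assumes the parsed prefixes array from the previous example); it applies the 2^(32 - prefix length) formula and subtracts the two excluded addresses per block:

    // Count the usable addresses for one service, excluding the first (all
    // zeroes) and last (all ones) address in each CIDR block.
    function countAddresses(prefixes, service) {
      return prefixes
        .filter(function (p) { return p.service === service; })
        .reduce(function (total, p) {
          var maskBits = parseInt(p.ip_prefix.split('/')[1], 10);  // "50.19.0.0/16" -> 16
          return total + Math.pow(2, 32 - maskBits) - 2;
        }, 0);
    }

    // For example: console.log(countAddresses(ranges.prefixes, 'EC2'));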

  • New APN (AWS Partner Network) Blog

    The AWS Partner Network (APN) is a rapidly growing ecosystem of Consulting and Technology partners. These partners push the boundaries of what can be done with cloud computing by creating value-added solutions for their customers. Our goal is to continue to support APN Partners as they work to build successful businesses on the AWS platform. As our ecosystem grows, we continue to launch new programs, benefits, and content for APN Partners.

    New APN Blog
    Today we are launching a new AWS Partner Network Blog that will serve as a central source for information that will be of interest to current and prospective APN Partners. We plan to provide up-to-date coverage of the entire partner ecosystem. Look for posts that discuss compelling APN solutions built on AWS, details on key AWS and APN launches of special interest to APN Partners, and stories of two or more APN Partners working jointly to provide a solution for a customer. The new blog is designed to serve as a real-time base for all things APN.

    In addition to APN Partner stories, the new blog will keep you informed of additions to the APN program. We'll continue to launch additional programs, such as the recent APN Competencies (including Storage and Life Sciences, with more on the way), intended to highlight the APN Partners with proven expertise in particular solution areas such as Big Data or specific workloads (Microsoft workloads are a good example). We also have a program designed for Managed Service Providers.

    The new blog will also put new APN launches into perspective. You'll be among the first to learn about key information and why it is important to your organization. It will also cover new content on the APN Portal along with AWS Training and Certification launches and other news of special interest to APN Partners from across AWS.

    Finally, the blog will take a closer look at the AWS, APN, and partner announcements that were made at AWS re:Invent.

    -- Jeff;

  • Amazon AppStream Update - Access Windows Apps on Chromebooks, MacBooks, Kindle Fires, and More

    20 Nov 2014 in Amazon AppStream | permalink


    AppStream can provide our customers with easier access to the tools they need on a wider range of devices than in the past.

    Ray Milhem
    VP of Enterprise Solutions at ANSYS

    When I first wrote about Amazon AppStream last year, I described the AppStream APIs and showed you how to use them to modify an existing application to give it the ability to stream output to a wide variety of output devices. The AppStream SDK can be used to build customized streaming experiences that integrate local and remote applications in a unified fashion. As an example of what can be done when AppStream is used in this manner, see my blog post, Amazon AppStream Can Improve the New-User Experience for Eve Online.

    Today I would like to tell you about an important new feature for AppStream. You can now stream just about any existing Microsoft Windows application without having to make any code changes. You simply step through a short installation and configuration process using the AWS Management Console. Once you've completed the process, your users can begin to use the application.

    This is a new way to deliver software that obviates the need for shipping CDs (you do remember those, right?) or waiting for massive downloads to complete. Your users can access the applications from devices that run FireOS, Android, Chrome, iOS, Mac OS X, or Microsoft Windows.

    On the development side, running the remote side of the application in a single, well-understood, cloud-based environment can dramatically shrink the size of the test matrix. The client application is relatively simple, with responsibility limited to authenticating users, decoding video streams, and relaying local events to AppStream. Because the run-time environment is well understood and under your control, problems related to libraries, DLLs, and video drivers are no longer an issue.

    Finally, streaming applications from the cloud can protect your proprietary data and code from undesired exposure. Put it all together and you have a new and very powerful way to deliver applications to your users!

    Getting Started
    Let's take an existing Windows application and make it available via streaming! Since AppStream runs the application on EC2's GPU-equipped g2 instance type, I went to the NVIDIA Demos page and chose the Design Garage. Then I opened up the Console and selected AppStream:

    I clicked on Deploy an Application and filled in the details:

    Then I installed the application using the streamed copy of Windows running within the Console:


    The download of the installation package takes place over the AWS backbone, generally at very high speed. This is yet another cool benefit of the cloud-based AppStream model.

    To finalize the installation I clicked on the Set launch path button to tell AppStream where to find the application. Setting the path initiates the deployment process:

    The deployment process can take 30 minutes or more (up to several hours) depending on the size of the application. As part of the process, AppStream creates an Amazon Machine Image (AMI) containing the application.

    Once the deployment process is complete, AppStream will automatically launch a server and have it standing by to accept connections (AppStream pricing is based on the total number of "streamed hours" per month, so you don't start to accrue any charges until the application is actually put to use).

    The console includes instructions and quick links so that I can easily test my application using a sample client:

    I downloaded the sample client and pasted in the quick link. Then I clicked on the Connect button and I was up and running without the need to install the application locally. Here's what I saw on my screen:

    The presentation was very responsive and free of lag (I deployed the app in US East (Northern Virginia) and accessed it from my desktop in Seattle). I was able to rotate and zoom the image quickly and efficiently. Although I used the Windows client for this demo, I could have also used the Chrome client. This would allow me to run the Design Garage on any platform that can run the Chrome browser — Chromebooks, Macs, Linux desktops, and more.

    In the example above I used the sample AppStream client. For production use you will need to customize the sample client or use it as the basis for your own, custom client. Your client will need to include a mechanism to authenticate users. For more information, read about Building a Client Application.

    Try it Now
    You should be able to think of all sorts of ways to put this new AppStream feature to use. You can deliver many types of applications (medical imaging, data visualization, and CAD all come to mind) to a very wide variety of mass-market devices without the need for lengthy downloads.

    AppStream is currently available in the US East (Northern Virginia) and Asia Pacific (Tokyo) Regions.

    I've saved the best part for last! You can try out this new feature at no cost as part of the AppStream Free Tier. The first 20 hours of streaming each month are free for one year. I'd like to invite you to go ahead, deploy an application, and take this new feature for a spin!

    -- Jeff;

  • Amazon Zocalo Update - Mobile Apps + 5 TB Files

    20 Nov 2014 in Amazon Zocalo | permalink

    I have a couple of pieces of good news for current and potential users of Amazon Zocalo. Both items are available now and you can start using them today.

    Zocalo Mobile Apps
    You can use our new mobile apps to access Zocalo on your iPhone or Android device using your corporate credentials. You can work offline, make comments, and securely share documents while you are in the air or on the go! The Android app is available on the Kindle and Google Play stores. The iPhone app is in the iOS AppStore.

    Here's what the Android version of the app looks like:

    And here's the iPhone version of the app:

    Support for 5 TB Files
    Zocalo users have been asking us to support larger files. Many of the requests have been coming from health care and media companies. For example, one of our largest Zocalo customers is a media production company. They appreciate the fact that Zocalo stores data in S3 and asked us to match the existing S3 object size limit of 5 TB.

    I am happy to report that you can now use Zocalo to upload, sync, and share files of up to 5 TB! As part of this change, the existing sync clients have been improved and now handle uploads and downloads with greater efficiency. They will now automatically resume large uploads and downloads as necessary (this feature makes use of S3's existing support for multipart uploads).
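
    The sync clients take care of all of this for you, but if you are curious about the underlying pattern, here is a rough sketch of a resumable upload built on S3's multipart upload API, using the AWS SDK for JavaScript. The bucket and file names are placeholders; a real client would upload parts in parallel, retry failures, and persist the upload ID so that it can resume via listParts after an interruption.

    // Sketch: send a large file to S3 as a series of parts so that an
    // interrupted transfer can be resumed instead of restarted.
    var AWS = require('aws-sdk');
    var fs = require('fs');

    var s3 = new AWS.S3();
    var FILE = 'big-render.mov';            // placeholder file name
    var BUCKET = 'my-sync-bucket';          // placeholder bucket
    var PART_SIZE = 100 * 1024 * 1024;      // 100 MB parts (a multi-TB object needs larger
                                            // parts to stay under the 10,000-part limit)
    var size = fs.statSync(FILE).size;
    var fd = fs.openSync(FILE, 'r');

    s3.createMultipartUpload({ Bucket: BUCKET, Key: FILE }, function (err, mpu) {
      if (err) { return console.error(err); }
      var parts = [];

      function sendPart(offset, partNumber) {
        if (offset >= size) {
          return s3.completeMultipartUpload({
            Bucket: BUCKET, Key: FILE, UploadId: mpu.UploadId,
            MultipartUpload: { Parts: parts }
          }, function (err) { console.log(err || 'upload complete'); });
        }
        var chunk = new Buffer(Math.min(PART_SIZE, size - offset));
        fs.readSync(fd, chunk, 0, chunk.length, offset);
        s3.uploadPart({
          Bucket: BUCKET, Key: FILE, UploadId: mpu.UploadId,
          PartNumber: partNumber, Body: chunk
        }, function (err, data) {
          if (err) { return console.error(err); }   // a real client would retry or resume here
          parts.push({ ETag: data.ETag, PartNumber: partNumber });
          sendPart(offset + PART_SIZE, partNumber + 1);
        });
      }
      sendPart(0, 1);
    });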

    The sync clients will prompt you to update. If you’re not running a sync client already, you can install one today.

    -- Jeff;

  • CloudSearch Update - Price Reduction, Hebrew & Japanese Support, Partitioning, CloudTrail

    I've got some good news for current and potential users of Amazon CloudSearch. As you may already know, CloudSearch is a fully managed service that makes it easy to set up, operate, and scale a search service for your website or application. If you use CloudSearch, you will benefit from a price reduction, additional language support, and control over domain partitioning (we released these features earlier this year but I didn't have a chance to blog about them at that time). You can also take advantage of the recently released support for AWS CloudTrail.

    Price Reduction
    An ever-increasing number of AWS customers are adopting CloudSearch and we are scaling accordingly. We are reducing the hourly charge for CloudSearch by up to 50%, across all AWS Regions and search instance types. This change is effective as of November 1, 2014 and will take effect with no action on your part. With this change, the overall cost to run CloudSearch compares very favorably to the cost of setting up, running, and scaling your own search infrastructure.

    Check out the CloudSearch Pricing page for more information.

    Additional Language Support
    Earlier this year we introduced language-specific text processing for Hebrew. With this addition, CloudSearch now supports a total of 34 languages. Here's a search of some Hebrew-language content:

    In mid-October we added support for custom tokenization dictionaries for Japanese. You can now control how CloudSearch tokenizes Japanese by adding a custom tokenization dictionary to the analysis scheme that you use for fields that contain Japanese-language text. To learn more, read about Customizing Japanese Tokenization in the CloudSearch Developer Guide.

    Control Over Partitioning
    If you are using the m2.2xlarge search instance type, you can now preconfigure the number of index partitions for your search domain. Preconfiguring a domain will improve the performance of large uploads. You can also add partitions to boost query performance by reducing the number of documents per partition. CloudSearch will still scale the domain up and down based on the volume of data and traffic, but the number of partitions will never drop below your desired partition count. You can exercise control over partitioning from the AWS Management Console, the CloudSearch APIs, or the AWS Command Line Interface (CLI). You can set it when you create a search domain:

    And you can update it later:
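
    The screenshots above show the console experience; the same settings can also be changed programmatically through the UpdateScalingParameters action. Here's a sketch using the AWS SDK for JavaScript (the domain name and partition count are placeholders):

    // Sketch: preconfigure the partition count for an existing search domain.
    var AWS = require('aws-sdk');
    var cloudsearch = new AWS.CloudSearch({ region: 'us-east-1' });

    cloudsearch.updateScalingParameters({
      DomainName: 'my-search-domain',               // placeholder
      ScalingParameters: {
        DesiredInstanceType: 'search.m2.2xlarge',   // preconfigured partitioning requires this type
        DesiredPartitionCount: 4                    // the domain never scales below this count
      }
    }, function (err, data) {
      console.log(err || data.ScalingParameters);
    });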

    CloudTrail Support
    Last month we added AWS CloudTrail support to CloudSearch. You can now use CloudTrail to get a history of the calls that are made to the CloudSearch API. The calls are recorded and delivered to an Amazon S3 bucket. To learn more, read about Logging Amazon CloudSearch Configuration Service Calls Using AWS CloudTrail.

    -- Jeff;

  • AWS Week in Review - November 10, 2014

    17 Nov 2014 | permalink

    Let's take a quick look at what happened in AWS-land last week. I have augmented selected items with links to their re:Invent presentations.

    Monday, November 10
    Tuesday, November 11
    Wednesday, November 12
    Thursday, November 13
    Friday, November 14

    Here are some of the events that we have on tap for the next week or two (visit the AWS Events page for more):

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;

  • Larger and Faster Elastic Block Store (EBS) Volumes

    As Werner just announced from the stage at AWS re:Invent, we have some great news for users of Amazon Elastic Block Store (EBS). We are planning to support EBS volumes that are larger and faster than ever before! Here are the new specs:

    • General Purpose (SSD) - You will be able to create volumes that store up to 16 TB and provide up to 10,000 baseline IOPS (up from 1 TB and 3,000 baseline IOPS). Volumes of this type will continue to support bursting to even higher performance levels (see my post on New SSD-Backed Elastic Block Storage for more information).
    • Provisioned IOPS (SSD) - You will be able to create volumes that store up to 16 TB and provide up to 20,000 Provisioned IOPS (up from 1 TB and 4,000 Provisioned IOPS).

    Newly created volumes will transfer data more than twice as fast, with a maximum throughput of 160 MBps for General Purpose (SSD) and 320 MBps for Provisioned IOPS (SSD).

    With more room to store data and the ability to get to it even more rapidly, you can now run demanding, large-scale workloads without having to stripe multiple volumes together or to do a complex dance when it comes time to create and coordinate snapshots. You can just create the volume and turn your attention to your data and your application.
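
    Creating one of the new volumes shouldn't look any different from creating a volume today. Here's a sketch using the AWS SDK for JavaScript; the Availability Zone is a placeholder, and the 16 TB size won't be accepted until the new limits are actually live:

    // Sketch: request a 16 TB (16,384 GiB) General Purpose (SSD) volume.
    var AWS = require('aws-sdk');
    var ec2 = new AWS.EC2({ region: 'us-east-1' });

    ec2.createVolume({
      AvailabilityZone: 'us-east-1a',   // placeholder
      VolumeType: 'gp2',                // General Purpose (SSD)
      Size: 16384                       // GiB; only valid once the larger limits launch
    }, function (err, volume) {
      console.log(err || volume.VolumeId);
    });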

    Stay tuned for more information on availability!

    -- Jeff;

  • New Compute-Optimized EC2 Instances

    13 Nov 2014 in Amazon EC2 | permalink

    Our customers continue to increase the sophistication and intensity of the compute-bound workloads that they run on the Cloud. Applications such as top-end website hosting, online gaming, simulation, risk analysis, and rendering are voracious consumers of CPU cycles and can almost always benefit from the parallelism offered by today's multicore processors.

    The New C4 Instance Type
    Today we are pre-announcing the latest generation of compute-optimized Amazon Elastic Compute Cloud (EC2) instances. The new C4 instances are based on the Intel Xeon E5-2666 v3 (code name Haswell) processor. This custom processor, designed specifically for EC2, runs at a base speed of 2.9 GHz, and can achieve clock speeds as high as 3.5 GHz with Turbo Boost. These instances are designed to deliver the highest level of processor performance on EC2. If you've got the workload, we've got the instance!

    Here's the lineup (these specs are preliminary and could change a bit before launch time):

    Instance Name    vCPU Count    RAM         Network Performance
    c4.large         2             3.75 GiB    Moderate
    c4.xlarge        4             7.5 GiB     Moderate
    c4.2xlarge       8             15 GiB      High
    c4.4xlarge       16            30 GiB      High
    c4.8xlarge       36            60 GiB      10 Gbps

    These instances are a great match for the SSD-Backed Elastic Block Storage that we introduced earlier this year. EBS Optimization is enabled by default for all C4 instance sizes, and is available to you at no extra charge. C4 instances also allow you to achieve significantly higher packet per second (PPS) performance, lower network jitter, and lower network latency using Enhanced Networking.

    Like most of our newer instance types, the C4 instances will use Hardware Virtualization (HVM) in order to get the best performance from the underlying CPU, and will run within a Virtual Private Cloud.

    The c4.8xlarge instances give you the ability to fine-tune the processor's performance and power management (which can affect maximum Turbo frequencies) using P-state and C-state control. They also give you 36 vCPUs for improved compute performance.

    Stay tuned for pricing and additional technical information!

    -- Jeff;

  • New Event Notifications for Amazon S3

    13 Nov 2014 in Amazon S3 | permalink

    Many AWS customers have been building applications that use Amazon Simple Storage Service (S3) for cost-efficient and highly scalable persistent or temporary object storage. Some of them want to initiate processing on the objects as they arrive; others want to capture information about the objects and log it for tracking or security purposes. These customers have been asking for a reliable and scalable way to be notified when an S3 object is created or overwritten.

    S3 Event Notifications
    Today we are launching a new event notification feature for S3. The bucket owner (or others, as permitted by an IAM policy) can now arrange for notifications to be issued to Amazon Simple Queue Service (SQS) or Amazon Simple Notification Service (SNS) when a new object is added to the bucket or an existing object is overwritten. Notifications can also be delivered to AWS Lambda for processing by a Lambda function. Here's the general flow:

    Here's what you need to do in order to start using this new feature with your application:

    1. Create the queue, topic, or Lambda function (which I'll call the target for brevity) if necessary.
    2. Grant S3 permission to publish to the target or invoke the Lambda function. For SNS or SQS, you do this by applying an appropriate policy to the topic or the queue. For Lambda, you must create and supply an IAM role, then associate it with the Lambda function.
    3. Arrange for your application to be invoked in response to activity on the target. As you will see in a moment, you have several options here.
    4. Set the bucket's Notification Configuration to point to the target.
    From that point forward, events will be reliably delivered to the target as appropriate.
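
    Everything in the list above can also be scripted. Here's a sketch of steps 2 and 4 for an SNS target, using the AWS SDK for JavaScript (the bucket name, topic ARN, and account number are placeholders):

    // Sketch: allow a bucket to publish to an SNS topic, then turn on
    // ObjectCreated notifications for the bucket.
    var AWS = require('aws-sdk');
    var s3 = new AWS.S3();
    var sns = new AWS.SNS();

    var BUCKET = 'jbarr-upload';                                              // placeholders
    var TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:jbarr-upload-notify';

    // Step 2 - topic policy that lets S3 publish on behalf of the bucket
    var policy = {
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Principal: { Service: 's3.amazonaws.com' },
        Action: 'SNS:Publish',
        Resource: TOPIC_ARN,
        Condition: { ArnLike: { 'aws:SourceArn': 'arn:aws:s3:::' + BUCKET } }
      }]
    };

    sns.setTopicAttributes({
      TopicArn: TOPIC_ARN,
      AttributeName: 'Policy',
      AttributeValue: JSON.stringify(policy)
    }, function (err) {
      if (err) { return console.error(err); }

      // Step 4 - point the bucket's notification configuration at the topic
      s3.putBucketNotificationConfiguration({
        Bucket: BUCKET,
        NotificationConfiguration: {
          TopicConfigurations: [{ Events: ['s3:ObjectCreated:*'], TopicArn: TOPIC_ARN }]
        }
      }, function (err) {
        console.log(err || 'notifications enabled');
      });
    });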

    Notifications are configured at the bucket level and apply to all of the objects in the bucket (we plan to provide control at a finer level at some point). You can elect to receive notification for any or all of the following events:

    • s3:ObjectCreated:Put - An object was created by an HTTP PUT operation.
    • s3:ObjectCreated:Post - An object was created by an HTTP POST operation.
    • s3:ObjectCreated:Copy - An object was created by an S3 copy operation.
    • s3:ObjectCreated:CompleteMultipartUpload - An object was created by the completion of an S3 multipart upload.
    • s3:ObjectCreated:* - An object was created by one of the event types listed above or by a similar object creation event added in the future.
    • s3:ReducedRedundancyObjectLost - An S3 object stored with Reduced Redundancy has been lost.

    Notification Details
    Each notification is delivered as a JSON object with the following fields:

    • Region
    • Timestamp
    • Event Type (as listed above)
    • Request Actor Principal ID
    • Source IP of the request
    • Request ID
    • Host ID
    • Notification Configuration Destination ID
    • Bucket Name
    • Bucket ARN
    • Bucket Owner Principal ID
    • Object Key
    • Object Size
    • Object ETag
    • Object Version ID (if versioning is enabled on the bucket)

    For use cases that require strong consistency on S3, it is a good practice to use versioning when you are overwriting objects. With versioning enabled for a bucket, the event notification will include the version ID. Your event handler can use the ID to fetch the latest version of the object. The notification also includes the ETag of the new object. Your code can Get the object and verify the ETag before processing. If the ETags do not match, you can defer processing by posting the message back to the SNS or SQS target. Note that eventual consistency is a concern only if your application allows existing objects to be overwritten.
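
    Here's a sketch of that pattern as a Lambda-style Node.js handler. The nested JSON field names are my reading of the notification format rather than something shown above, so check them against a real message; the re-post logic is left out for brevity.

    // Sketch: process a notification only if the object we fetch still matches
    // the ETag named in the event; otherwise signal a failure so that the
    // message can be retried or re-posted.
    var AWS = require('aws-sdk');
    var s3 = new AWS.S3();

    exports.handler = function (event, context) {
      var record = event.Records[0];
      var bucket = record.s3.bucket.name;
      var key = record.s3.object.key;
      var eTag = record.s3.object.eTag;

      // IfMatch makes S3 return 412 Precondition Failed if the object has
      // been overwritten since the notification was generated.
      s3.getObject({ Bucket: bucket, Key: key, IfMatch: eTag }, function (err, obj) {
        if (err && err.statusCode === 412) {
          return context.done(new Error('stale notification - defer processing'));
        }
        if (err) { return context.done(err); }
        // ... process obj.Body here ...
        context.done(null, 'processed ' + key);
      });
    };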

    Configuring Notifications Using the Console
    Here is how to configure an event notification using the AWS Management Console. I have a bucket named jbarr-upload and I want to send notifications to an SNS topic named jbarr-upload-notify. I have already configured the topic to send an email to me (this would generate an overwhelming amount of email and would not be suitable for an actual application, but it makes for a good demo). I start by granting permission for S3 to publish to the topic:

    Then I configure the bucket to send notification to my topic:

    I can use the menu to choose the event types that are of interest to me:

    For testing purposes, I use the console to upload an object:

    Here's the resulting email notification (I've formatted the JSON for readability in order to get my point across):

    As I noted earlier, email notification is inadvisable for production-scale applications.

    I'm confident that you can pick up from where I left off and start integrating this feature into your own applications. You can, of course, use the AWS SDKs to configure and manage notifications.

    Things to Know
    Here are a couple of things to keep in mind as you start to think about the best way to use these new notifications as part of your application:

    • Delivery Latency - Notifications are delivered to the target in well under a second.
    • Cost - There is no charge for this feature. You will pay the usual messaging and execution charges for SQS, SNS, and Lambda (many applications can run within the AWS Free Tier).
    • Regions - The bucket and the target (SQS, SNS, or Lambda) must reside in the same AWS Region.
    • Event Types - You can configure one notification per event type per bucket.
    • Delivery Reliability - S3 is designed to deliver notifications with a very high degree of reliability. It includes built-in backoff and retry mechanisms to deal with momentary issues that might affect the deliverability of messages to any of the three types of targets.
    • Additional Event Types - We expect to add additional event types over time and your feedback will help us to prioritize our work. Please feel free to tell us more about your needs in the S3 Forum.

    Availability
    This feature is available now and you can start using it today! I am looking forward to hearing all about the interesting use cases that you come up with.

    -- Jeff;

  • AWS Lambda - Run Code in the Cloud

    We want to make it even easier for you to build applications that run in the Cloud. We want you to be able to focus on your code, and to work within a cloud-centric environment where scalability, reliability, and runtime efficiency are all high enough to be simply taken for granted!

    Today we are launching a preview of AWS Lambda, a brand-new way to build and run applications in the cloud, one that lets you take advantage of your existing programming skills and your knowledge of AWS. With Lambda, you simply create a Lambda function, give it permission to access specific AWS resources, and then connect the function to your AWS resources. Lambda will automatically run code in response to modifications to objects uploaded to Amazon Simple Storage Service (S3) buckets, messages arriving in Amazon Kinesis streams, or table updates in Amazon DynamoDB.

    Lambda is a zero-administration compute platform. You don't have to configure, launch, or monitor EC2 instances. You don't have to install any operating systems or language environments. You don't need to think about scale or fault tolerance and you don't need to request or reserve capacity. A freshly created function is ready and able to handle tens of thousands of requests per hour with absolutely no incremental effort on your part, and on a very cost-effective basis.

    Let's dig in! We'll take a more in-depth look at Lambda, sneak a peek at the programming model and runtime environment, and then walk through a programming example. As you read through this post, keep in mind that we have plenty of items on the Lambda roadmap and that what I am able to share today is just the first step on what we expect to be an enduring and feature-filled journey.

    Lambda Concepts
    The most important Lambda concept is the Lambda function, or function for short. You write your functions in Node.js (an event-driven, server-side JavaScript runtime).

    You upload your code and then specify context information to AWS Lambda to create a function. The context information specifies the execution environment (language, memory requirements, a timeout period, and IAM role) and also points to the function you'd like to invoke within your code. The code and the metadata are durably stored in AWS and can later be referred to by name or by ARN (Amazon Resource Name). You can also include any necessary third-party libraries in the upload (which takes the form of a single ZIP file per function).

    After uploading, you associate your function with specific AWS resources (a particular S3 bucket, DynamoDB table, or Kinesis stream). Lambda will then arrange to route events (generally signifying that the resource has changed) to your function.

    When a resource changes, Lambda will execute any functions that are associated with it. It will launch and manage compute resources as needed in order to keep up with incoming requests. You don't need to worry about this; Lambda will manage the resources for you and will shut them down if they are no longer needed.

    Lambda is accessible from the AWS Management Console, the AWS SDKs and the AWS Command Line Interface (CLI). The Lambda APIs are fully documented and can be used to connect existing code editors and other development tools to Lambda.

    Lambda Programming Model
    Functions are activated after the associated resource has been changed. Execution starts at the designated Node.js function and proceeds from there. The function has access (via a parameter supplied along with the POST) to a JSON data structure. This structure contains detailed information about the change (or other event) that caused the function to be activated.

    Lambda will activate additional copies of your function as needed in order to keep pace with changes. The functions cannot store durable state on the compute instance and should use S3 or DynamoDB instead.

    Your code can make use of just about any functionality that is intrinsic to Node.js and to the underlying Linux environment. It can also use the AWS SDK for JavaScript in Node.js to make calls to other AWS services.
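
    Here's what a minimal function might look like, a sketch that simply logs each incoming event and, when the event came from S3, pulls out the bucket and object names (the nested field names are illustrative and should be checked against a real event):

    // Sketch: a minimal Lambda function that logs each incoming event and,
    // for S3 events, prints the bucket and key before finishing.
    exports.handler = function (event, context) {
      console.log('Received event:', JSON.stringify(event, null, 2));

      if (event.Records && event.Records[0].s3) {
        var s3Info = event.Records[0].s3;
        console.log('Bucket:', s3Info.bucket.name, 'Key:', s3Info.object.key);
      }

      context.done(null, 'ok');   // tell Lambda that the invocation is finished
    };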

    Lambda Runtime Environment
    The context information that you supply for each function specifies a maximum execution time for the function. This is typically set fairly low (you can do a lot of work in a couple of seconds) but can be set to up to 60 seconds as your needs dictate.

    Lambda uses multiple IAM roles to manage access to your functions and your AWS resources. The invocation role gives Lambda permission to run a particular function. The execution role gives a function permission to access specific AWS resources. You can use distinct roles for each function in order to implement a fine-grained set of permissions.

    Lambda monitors the execution of each function and stores request count, latency, availability, and error rate metrics in Amazon CloudWatch. The metrics are retained for 30 days and can be viewed in the Console.

    Here are a few things to keep in mind as you start to think about how you will put Lambda to use:

    • The context information for a function specifies the amount of memory needed to run it. You can set this to any desired value between 128 MB and 1 GB. The memory setting also determines the amount of CPU power, network bandwidth, and I/O bandwidth that are made available to the function.
    • Each invocation of a function can make use of up to 256 processes or threads. It can consume up to 512 MB of local storage and up to 1,024 file descriptors. It can also create up to 10 simultaneous outbound network connections.
    • Lambda imposes a set of administrative limits on each AWS account. During the preview, you can have up to 25 invocation requests underway simultaneously.

    Lambda in Action
    Let's step through the process of creating a simple function using the Management Console. As I mentioned earlier, you can also do this from the SDKs and the CLI. The console displays all of my functions:

    I simply click on Create Function to get started. Then I fill in all of the details:

    I name and describe my function:

    Then I enter the code or upload a ZIP file. The console also offers a choice of sample code snippets to help me to get started:

    Now I tell Lambda which function to run and which IAM role to use when the code runs:

    I can also fine-tune the memory requirements and set a limit on execution time:

    After I create my function, I can iteratively edit and test it from within the Console. As you can see, the pane on the left shows a sample of the JSON data that will be passed to my function:

    When the function is working as expected, I can attach it to an event source such as Amazon S3 event notification. I will need to provide an invocation role in order to give S3 the permission that it needs to invoke the function:

    Lambda collects a set of metrics for each of my functions and sends them to Amazon CloudWatch. I can view the metrics from the Console:

    On the Roadmap
    We have a great roadmap for Lambda! While I won't spill all of the beans today, I will tell you that we expect to add support for additional AWS services and other languages. As always, we love your feedback; please leave a note in the Lambda Forum.

    Pricing & Availability
    Let's talk about pricing a bit before wrapping up! Lambda uses a fine-grained pricing model. You pay for compute time in units of 100 milliseconds and you pay for each request. The Lambda free tier includes 1 million free requests per month and up to 3.2 million seconds of compute time per month, depending on the amount of memory allocated per function (a function that uses the minimum 128 MB allocation gets the full 3.2 million seconds; larger allocations consume the free compute allowance proportionally faster).

    Lambda is available today in preview form in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) Regions. If you would like to get started, register now.

    -- Jeff;