Category: Amazon EC2


New Amazon EC2 Micro Instances – New, Low Cost Option for Low Throughput Applications

by Jeff Barr | on | in Amazon EC2 |

I can't tell you how many of you have told me you'd like to run smaller applications at lower cost on EC2. These applications are typically low traffic / low throughput: web applications, web site hosting, various types of periodic cron jobs, and the like.

I'm happy to say we have now built an instance type exactly for these purposes, called Micro instances, starting at $0.02 (two cents) per hour for Linux/Unix and $0.03 (three cents) per hour for Windows.

Micro Instances (t1.micro) provide a small amount of consistent CPU resources and allow you to burst CPU capacity when additional cycles are available. They are available now in all Regions. You can buy Reserved Micro Instances and you can acquire Micro Instances on the Spot Market. Interestingly enough, they are available in both 32 and 64 bit flavors, both with 613 MB of RAM. The Micro Instances have no local, ephemeral storage, so you’ll need to Boot from EBS.
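
If you prefer the command line to the console, launching one is a single call to ec2-run-instances. Here's a minimal sketch using the EC2 command-line tools; the AMI ID and key pair name are placeholders, and you'll want to pick an EBS-backed AMI since Micro instances boot from EBS:

# Launch one EBS-backed Micro instance (AMI ID and key pair name are placeholders)
$ ec2-run-instances ami-xxxxxxxx -t t1.micro -k my-keypair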

CloudWatch can be used to watch the level of CPU utilization to understand when the available CPU bursting has been used within a given time period. If your instance’s CPU utilization is approaching 100% then you may want to scale (using Auto Scaling) to additional Micro instances or to a larger instance type. In fact, at this low a price you could run CloudWatch configured for Auto Scaling with two Micro instances behind an Elastic Load Balancer for just under the price of one CloudWatch-monitored Standard Small instance.
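
As a rough sketch of what that two-Micro setup might look like with the Auto Scaling command-line tools (the AMI ID, Availability Zone, and load balancer name below are placeholders, and your own scaling triggers will vary):

# Launch configuration that boots Micro instances from a placeholder AMI
$ as-create-launch-config micro-lc --image-id ami-xxxxxxxx --instance-type t1.micro

# Keep two to four Micro instances registered with an existing Elastic Load Balancer
$ as-create-auto-scaling-group micro-asg --launch-configuration micro-lc --availability-zones us-east-1a --min-size 2 --max-size 4 --load-balancers my-elb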

While designed to host web applications and web sites that don’t receive all that much traffic (generally tens of requests per minute, depending on how much CPU time is needed to process the request), I’m pretty sure that you’ll be able to put this new instance type to use in some interesting ways. Here are some of my thoughts:

  1. DNS servers, load balancers, proxies, and similar services that handle a relatively low volume of requests.
  2. Lightweight cron-driven tasks such as monitoring, health checks, or data updates.
  3. Hands-on training and other classroom use.

Feel free to post your ideas (and your other thoughts) in the comments.

Update: The AWS Simple Monthly Calculator now includes the Micro instances. The calculation at right illustrates the costs for a three year Reserved Instance running Linux/Unix full time.

 

— Jeff;

AWS Management Console Support for the Amazon Virtual Private Cloud

by Jeff Barr | on | in Amazon EC2, Amazon VPC |

The AWS Management Console now supports the Amazon Virtual Private Cloud (VPC). You can now create and manage a VPC and all of the associated resources including subnets, DHCP Options Sets, Customer Gateways, VPN Gateways and the all-important VPN Connection from the comfort of your browser.

Put it all together and you can create a secure, seamless bridge between your existing IT infrastructure and the AWS cloud in a matter of minutes. You'll need to get some important network addressing information from your network administrator beforehand, and you'll need their help to install a configuration file for your customer gateway.

Here are some key VPC terms that you should know before you read the rest of this post (these were lifted from the Amazon VPC Getting Started Guide):

VPC – An Amazon VPC is an isolated portion of the AWS cloud populated by infrastructure, platform, and application services that share common security and interconnection. You define a VPC’s address space, security policies, and network connectivity.

Subnet – A segment of a VPC’s IP address range that Amazon EC2 instances can be attached to.

VPN Connection – A connection between your VPC and data center, home network, or co-location facility. A VPN connection has two endpoints: a Customer Gateway and a VPN Gateway.

Customer Gateway – Your side of a VPN connection that maintains connectivity.

VPN Gateway – The Amazon side of a VPN connection that maintains connectivity.

Let’s take a tour through the new VPC support in the console. As usual, it starts out with a new tab in the console’s menu bar:

The first step is to create a VPC by specifying its IP address range using CIDR notation. I’ll create a “/16” to allow up to 65536 instances (the actual number will be slightly less because VPC reserves a few IP addresses in each subnet) in my VPC:

The next step is to create one or more subnets within the IP address range of the VPC. I’ll create a pair, each one covering half of the overall IP address range of my VPC:
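
If you'd rather script these first two steps than click through them, the EC2 command-line tools expose the same operations. Here's a rough sketch with placeholder IDs:

# Create the VPC, then carve its /16 into two /17 subnets (vpc-xxxxxxxx is a placeholder)
$ ec2-create-vpc 10.0.0.0/16
$ ec2-create-subnet -c vpc-xxxxxxxx -i 10.0.0.0/17
$ ec2-create-subnet -c vpc-xxxxxxxx -i 10.0.128.0/17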


The console shows all of the subnets and the number of available IP addresses in each one:

You can choose to create a DHCP Option Set for additional control of domain names, IP addresses, NTP servers, and NetBIOS options. In many cases the default option set will suffice.

And the next step is to create a Customer Gateway to represent the VPN device on the existing network (be sure to use the BGP ASN and IP Address of your own network):

We’re almost there! The next step is to create a VPN Gateway (to represent the VPN device on the AWS cloud) and to attach it to the VPC:


The VPC Console Dashboard displays the status of the key elements of the VPC:

With both ends of the connection ready, the next step is to make the connection between your existing network and the AWS cloud:

This step (as well as some of the others) can take a minute or two to complete.
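
These console steps can be scripted as well. Here's a hedged sketch with the EC2 command-line tools, using placeholder IDs and a sample IP address and BGP ASN:

# Your side of the tunnel: use your own public IP address and BGP ASN
$ ec2-create-customer-gateway -t ipsec.1 -i 203.0.113.10 -b 65000

# The AWS side: create a VPN gateway and attach it to the VPC (IDs are placeholders)
$ ec2-create-vpn-gateway -t ipsec.1
$ ec2-attach-vpn-gateway vgw-xxxxxxxx -c vpc-xxxxxxxx

# Tie the two ends together with a VPN connection
$ ec2-create-vpn-connection -t ipsec.1 --customer-gateway cgw-xxxxxxxx --vpn-gateway vgw-xxxxxxxx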

Now it is time to download the configuration information for the customer gateway.


The configuration information is provided as a text file suitable for use with the specified type of customer gateway:

Once the configuration information has been loaded into the customer gateway, the VPN tunnel can be established and it will be possible to make connections from within the existing network to the newly launched EC2 instances.

I think that you’ll agree that this new feature really simplifies the process of setting up a VPC, making it accessible to just about any AWS user. What do you think?

–Jeff;

Amazon EC2 Price Reduction

by Jeff Barr | on | in Amazon EC2, Price Reduction |

We’re always looking for ways to make AWS an even better value for our customers. If you’ve been reading this blog for an extended period of time you know that we reduce prices on our services from time to time.

Effective September 1, 2010, we've reduced the On-Demand and Reserved Instance prices on the m2.2xlarge (High-Memory Double Extra Large) and the m2.4xlarge (High-Memory Quadruple Extra Large) by up to 19%. If you have existing Reserved Instances, your hourly usage rate will automatically be lowered to the new usage rate and your estimated bill will reflect these changes later this month. As an example, the hourly cost for an m2.4xlarge instance running Linux/Unix in the us-east Region drops from $2.40 to $2.00. This price reduction means you can now run database, memcached, and other memory-intensive workloads at substantial savings. Here's the full EC2 price list.

As a reminder, there are many different ways to optimize your costs. When compared to On-Demand instances, Reserved Instances enable you to reduce your overall instance costs by up to 56%.  You pay a low, one-time fee to reserve an instance for a one or three year period. You can then run that instance whenever you want, at a greatly reduced hourly rate.

For background processing and other jobs where you have flexibility in when they run, you can also use Spot Instances by placing a bid for unused capacity. Your job will run as long as your bid is higher than the current spot price.
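
For example, a Spot request made with the EC2 command-line tools might look like this (the AMI ID and bid price are purely illustrative):

# Bid $0.05 per hour for one m1.small Spot instance (hypothetical AMI ID)
$ ec2-request-spot-instances ami-xxxxxxxx -p 0.05 -t m1.small -n 1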

— Jeff;

 

 

Happy 4th Birthday Amazon EC2

by Jeff Barr | on | in Amazon EC2 |

I almost missed a really important anniversary! Yesterday marked Amazon EC2‘s fourth birthday. Here are some of the ways that EC2 has grown and changed in the last four years:

Category | 2006 | 2010
Regions | One | Four
Availability Zones | One | Ten
Instance Types | One | Nine
Pricing Models | One | Three
Storage | Ephemeral Storage | Ephemeral Storage, Elastic Block Store
Operating Systems | Linux | Linux, Windows, OpenSolaris
Management Tools | Command-Line Tools | Command-Line Tools, AWS Management Console, Third-Party Tools
Ancillary Services | (none) | Elastic Load Balancing, Auto Scaling, CloudWatch
High Performance Computing | (none) | Elastic MapReduce, Cluster Compute Instances

We’ve done quite a bit, but we’re not resting, not for a minute. We have a lot of open positions on the AWS team, including a really interesting developer position within the EC2 team. This developer will focus on EC2’s dynamic market pricing features. In addition to experience with Ruby, Perl, Java, C, or C++, candidates should have some experience building large-scale distributed systems and an interest in operational scheduling, optimization, and constraint satisfaction. You can read more here and you can send your resume directly to amazon-ec2-spot-jobs@amazon.com.

While I am on the subject of anniversaries, eight years ago this month I abandoned my full-time consulting practice to take a development position with the Amazon Associates Team, with the agreement that I could spend some of my time helping out with the effort to create and market the E-Commerce Service (which has since become the Product Advertising API). A few months in, I was asked if I would mind speaking at a conference. I guess I did ok, because they asked me to do another one, and before too long they invited me to apply for the position of Web Services Evangelist. I took on that title in the spring of 2003 and have been spreading the word about our web service efforts ever since. All things considered, this is a really awesome place to work. Day after day, week after week, things get more and more exciting around here. The pace is quick and I do my best to keep up. We do our best to understand and to meet the needs of our customers with regard to features, reliability, scale, business models, and price. I get to work with and to learn from a huge number of world-class intellects. If this sounds like the kind of place for you, check out our list of open jobs and apply today!

— Jeff;

Use Your Own Kernel with Amazon EC2

by Jeff Barr | on | in Amazon EC2 |

You can now use the Linux kernel of your choice when you boot up an Amazon EC2 instance. 

We have created a set of AKIs (Amazon Kernel Images) which contain the PV-Grub loader. This loader simply chain-boots the kernel provided in the associated AMI (Amazon Machine Image). Net-net, your instance ends up running the kernel included in the AMI rather than the kernel (AKI) specified when the instance is launched.

You need to install an “EC2 compatible” kernel and create an initrd (initial RAM disk) as part of your AMI. You also need to create a menu (/boot/grub/menu.lst) for the Grub boot loader. Once you’ve done this you can create the AMI and then launch instances by using one of the PV-Grub “kernels” as described above. You may find this document to be helpful if you want to learn more about the Linux boot process.
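
As a rough illustration, a minimal /boot/grub/menu.lst for PV-Grub might look something like this; the kernel and initrd file names are placeholders and will vary with your distribution, as may the root device:

default 0
timeout 0

title My Custom EC2 Kernel
    root (hd0)
    kernel /boot/vmlinuz-2.6.32-custom ro root=/dev/sda1 console=hvc0
    initrd /boot/initrd-2.6.32-custom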

To be compatible with EC2, a Linux kernel must support Xen’s pv_ops (paravirtual ops) infrastructure with XSAVE disabled or the Xen 3.0.2 interface. The following kernels have been tested and/or have vendor support:

  • Fedora 8-12 Xen kernels
  • SLES/openSUSE 10.x, 11.0, and 11.1 Xen kernels
  • SLES/openSUSE 11.x EC2 Variant
  • Ubuntu EC2 Variant
  • RHEL 5.x
  • CentOS 5.x

Other kernels may not start reliably within EC2. We’re working with the providers of popular AMIs to make sure that they will start to use PV-Grub in the near future.

You can read more about this in our “Enabling User Provided Kernels in Amazon EC2” document.

— Jeff;

PS – You could (if you are sufficiently adept) use this facility to launch an operating system that we don’t support directly (e.g. FreeBSD). If you manage to do this, please feel free to let me know.

New Amazon EC2 Instance Type – The Cluster Compute Instance

by Jeff Barr | on | in Amazon EC2 |

A number of AWS users have been using Amazon EC2 to solve a variety of computationally intensive problems. Here’s a sampling:

  • Atbrox and Lingit use Elastic MapReduce to build data sets that help individuals with dyslexia to improve their reading and writing skills.
  • Systems integrator Cycle Computing helps Varian to run compute-intensive Monte Carlo simulations.
  • Harvard Medical School‘s Laboratory for Personalized Medicine creates innovative genetic testing models.
  • Pathwork Diagnostics runs tens of thousands of models to help oncologists to diagnose hard-to-identify cancer tumors.
  • Razorfish processes huge datasets on a very compressed timescale.
  • The Server Labs helps the European Space Agency to build the operations infrastructure for the Gaia project.

Some of these problems are examples of what is known as embarrassingly parallel computing. Others leverage the Hadoop framework for data-intensive computing, spreading the workload across a large number of EC2 instances.

Our customers have also asked us about the ability to run even larger and more computationally complex workloads in the cloud.

It is clear that people are now figuring out that they can do HPC (High-Performance Computing) in the cloud. We want to make it even easier and more efficient for them to do so!

Our new Cluster Compute Instances will fit the bill. With Cluster Compute Instances, you can now run many types of large-scale network-intensive jobs without losing the core advantages of EC2: a pay-as-you-go pricing model and the ability to scale up and down to meet your needs.

Each Cluster Compute Instance consists of a pair of quad-core Intel “Nehalem” X5570 processors with a total of 33.5 ECU (EC2 Compute Units), 23 GB of RAM, and 1690 GB of local instance storage, all for $1.60 per hour.

Because many HPC applications and other network-bound applications make heavy use of network communication, Cluster Compute Instances are connected using a 10 Gbps network. Within this network you can create one or more placement groups of type “cluster” and then launch Cluster Compute Instances within each group. Instances within each placement group of this type benefit from non-blocking bandwidth and low latency node to node communication.

The EC2 APIs, the command-line tools, and the AWS Management Console have all been updated to support the creation and use of placement groups. For example, the following pair of commands creates a placement group called biocluster and then launches 8 Cluster Compute Instances inside of the group:

$ ec2-create-placement-group biocluster -s cluster

$ ec2-run-instances ami-2de43f55 --type cc1.4xlarge --placement-group biocluster -n 8

The new instance type is now available for Linux/UNIX use in a single Availability Zone in the US East (Northern Virginia) region. We’ll support it in additional zones and regions in the future. You can purchase individual Reserved Instances for a one or a three year term, but you can’t buy them within specific cluster placement groups just yet. There is a default usage limit for this instance type of 8 instances (providing 64 cores). If you wish to run more than 8 instances, you can request a higher limit using the Amazon EC2 instance request form.

The Cluster Compute Instances use hardware-assisted (HVM) virtualization instead of the paravirtualization used by the other instance types and require booting from EBS, so you will need to create a new AMI in order to use them. We suggest that you use our CentOS-based AMI as a base for your own AMIs for optimal performance. See the EC2 User Guide or the EC2 Developer Guide for more information.

The only way to know if this is a genuine HPC setup is to benchmark it, and we’ve just finished doing so. We ran the gold-standard High Performance Linpack benchmark on 880 Cluster Compute instances (7040 cores) and measured the overall performance at 41.82 TeraFLOPS using Intel’s MPI (Message Passing Interface) and MKL (Math Kernel Library) libraries, along with their compiler suite. This result places us at position 146 on the Top500 list of supercomputers. The input file for the benchmark is here and the output file is here.

Putting this all together, I think that we have put together a true fire-breathing dragon of an offering. You can now get world-class compute and network performance on an economical, pay-as-you-go basis.  The individual instances perform really well, and you can tie a bunch of them together using a fast network to attack large-scale problems. I’m fairly certain that you can’t get this much compute power so fast or so economically anywhere else.

I’m looking forward to writing up and sharing some of the success stories from the customers who’ve been helping us to test the Cluster Compute instances during our private beta test. Feel free to share your own success stories with me once you’ve had a chance to give them a try.

Update – Here’s some additional info:

— Jeff;

New VPC Features: IP Address Control and Config File Generation

by Jeff Barr | on | in Amazon EC2, Amazon VPC |

We’ve added two new features to the Amazon Virtual Private Cloud (VPC) to make it more powerful and easier to use. Here’s the scoop:

  • IP Address Control – You can now assign the IP address of your choice to each of the EC2 instances that you launch in your Virtual Private Cloud. The address must be within the range of addresses that you designated for the VPC, it must be available for use within the instance’s network subnet, and it must not conflict with any of the addresses that are reserved for internal use by AWS. You can specify the desired address as an optional parameter to the RunInstances function. This will allow you to have additional control of your network configuration, and has been eagerly anticipated by many of our customers. Two use cases that we’ve heard about already are running DNS servers and Active Directory Domain Controllers. (There’s a command-line sketch just after this list.)
  • Config File Generation – VPC can now generate configuration files (example at right) for several different types of devices including the Cisco ISR and a number of Juniper products including the J-Series Service Router, the SSG (Secure Services Gateway), and the ISG (Integrated Security Gateway). The files can be generated from the command line or from within ElasticFox. Generating the config files in this way lets you avoid common configuration issues and allows you to be up and running in minutes.
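
As promised above, here's a sketch of the first feature using the EC2 command-line tools; the AMI, subnet ID, and address are placeholders, and the address must satisfy the rules described above:

# Launch into a VPC subnet with a hand-picked private IP address (placeholder IDs)
$ ec2-run-instances ami-xxxxxxxx -t m1.small -s subnet-xxxxxxxx --private-ip-address 10.0.0.12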
 

If you want to connect a Linux-based VPN gateway to your Virtual Private Cloud, take a look at Amazon VPC With Linux. This article will show you how to set up IPSec and BGP routing and includes detailed configuration information.

If you are running OpenSolaris, take a look at the OpenSolaris VPC Gateway Tool.

— Jeff;

Big Data Workshop and EC2

by Jeff Barr | on | in Amazon EC2, Amazon SDB, Customer Success |

Many fields in industry and academia are experiencing an exponential growth in data production and throughput, from social graph analysis to video transcoding to high energy physics. Constraints are everywhere when working with very large data sets, and provisioning sufficient storage and compute capacity for these fields is challenging.

This is particularly true for the biological sciences after the recent quantum leap in DNA sequencing technology. These advances represented a step change for the field of genomics, which had to learn quickly how to house and process terabytes of data through complex, often experimental workflows.

Processing data of this scale is challenging for a single user, but moving to the cloud meant that Michigan State University was able to provide real-world training to whole groups of new scientists using Amazon's EC2 and S3 services.

Titus Brown writes about his experiences of running a next-generation sequencing workshop using Amazon Web Services in a pair of blog posts:

“Students can choose whatever machine specs they need in order to do their analysis. More memory? Easy. Faster CPU needed? No problem.

All of the data analysis takes place off-site. As long as we can provide the data sets somewhere else (I’ve been using S3, of course) the students don’t need to transfer multi-gigabyte files around.

The students can go home, rent EC2 machines, and do their own analyses — without their labs buying any required infrastructure.”

After the two week event:

“I have little doubt that this course would have been nearly impossible (and either completely ineffective or much more expensive) without it.

In the end, we spent more on beer than on computational power. That says something important to me.”

A great example of using EC2 for ad-hoc, scientific computation and reaping the rewards of a cloud infrastructure for low cost, reproducibility and scale.

~ Matt

New: CloudWatch Metrics for Amazon EBS Volumes

by Jeff Barr | on | in Amazon EC2 |

If you already have some EBS (Elastic Block Store) volumes, stop reading this post now!

Instead, open up the AWS Management Console in a fresh browser tab, select the Amazon EC2 tab, and click on Volumes (or use this handy shortcut to go directly there). Click on one of your EBS volumes and you’ll see a brand new Monitoring tab. Click on that tab and you’ll see ten graphs with information about the performance of the volume.

For those of you without any EBS volumes (what are you waiting for?), here’s what you are missing:

Effective immediately, we now store eight metrics in Amazon CloudWatch for each of your EBS volumes. The metrics are stored with a granularity of five minutes and each data point represents the activity over the period. Here’s what we store for you:

  • VolumeReadBytes – The number of bytes read from the volume over the five minute period.
  • VolumeWriteBytes – The number of bytes written to the volume over the five minute period.
  • VolumeReadOps – The number of read operations performed on the volume in the period.
  • VolumeWriteOps – The number of write operations performed on the volume in the period.
  • VolumeTotalReadTime – The total amount of waiting time consumed by all of the read operations which completed during the period.
  • VolumeTotalWriteTime – The total amount of waiting time consumed by all of the write operations which completed during the period.
  • VolumeIdleTime – The amount of time when no read or write operations were waiting to be completed during the period.
  • VolumeQueueLength – The average number of read and write operations waiting to be completed during the period.

You can access all of this from the CloudWatch API and the CloudWatch command-line (API) tools of course.
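
For example, pulling an hour of read activity for a volume might look like this with the command-line tools (the volume ID and times are placeholders):

# Sum of read operations in five-minute buckets over one hour (placeholder volume ID)
$ mon-get-stats VolumeReadOps --namespace "AWS/EBS" --dimensions "VolumeId=vol-xxxxxxxx" --statistics Sum --period 300 --start-time 2010-09-01T00:00:00 --end-time 2010-09-01T01:00:00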

You can use these metrics to diagnose performance issues, learn more about the level of usage of each of your volumes, or to track long-term performance trends for your application.

There is no additional charge for monitoring EBS volumes and the metrics are stored for two weeks.

— Jeff;

Building three-tier architectures with security groups

by Jeff Barr | on | in Amazon EC2, Architecture, Security |

Update (17 June): I’ve changed the command-line examples to reflect current capabilities of our SOAP and Query APIs. They do, in fact, allow specifying a protocol and port range when you’re using another security group as the traffic origin. Our Management Console will support this functionality at a later date.

During a recent webcast an attendee asked a question about building multi-tier architectures on AWS. Unlike with traditional on-premise physical deployments, AWS’s virtualization of compute, storage, and network elements requires that you think differently about how to build network segregation into your projects. There are no distinct physical networks, no VLANs, and no DMZs. So how can you construct the equivalent of traditional three-tier architectures?

Our security whitepaper alludes to the possibility (pp. 5-6, November 2009 edition). In my security presentations I show this diagram to illustrate conceptually how a three-tier architecture can be built:

EC2 three-tier architecture V2

Security groups: a quick review

Before we explore how to define the architecture, let’s take a moment to review some critical details about how security groups work.

A security group is a semi-stateful firewall (more on this in a moment) that contains one or more rules defining which traffic is permitted into an instance. Rules contain the following elements:

  • The permitted protocol (TCP or UDP)
  • The permitted destination port range (more on this in a moment, too)
  • The permitted source IP address range or originating security group

Now there are three particular aspects I’d like to call your attention to. First: security groups are semi-stateful because changes made to their rules don’t apply to any in-progress connections. Say that you currently have a rule permitting inbound traffic to port 3579/tcp, and that there are right now five inbound connections to this port. If you delete the rule from the group, the group blocks any new inbound requests to port 3579/tcp but doesn’t terminate the existing five connections. This behavior is intentional; I want to ensure everyone understands this. In all other respects, security groups behave like traditional stateful firewalls.

The second aspect is our terminology for port ranges. This often confuses people new to AWS. The traditional usage of the words “from” and “to” in security-speak describes traffic direction: “from” indicates the source and “to” indicates the destination. This isn’t the case when defining rules for security groups. Instead, security group rules concern themselves only with destination ports; that is, the ports on your instances listening for incoming connections. The “from port” and “to port” in a security group rule indicate the starting and ending port numbers for occasions when you need to define a range of listening ports. In most cases you need to allow only a single port, so the values for “from port” and “to port” will be the same.
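
For example, a rule permitting a contiguous range of listening ports sets different "from" and "to" values. Here's an illustrative command (the group name and source range are made up):

# Allow inbound TCP to listening ports 8000 through 8080 from one sample /24
$ ec2-authorize MyAppSG -P tcp -p 8000-8080 -s 203.0.113.0/24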

This leads to the third aspect I’d like to discuss: how to define traffic sources. The most common method is to specify a protocol along with an individual source IP address, a range of IP addresses using CIDR notation, or the entire Internet (using 0.0.0.0/0). The other way to define a traffic source is to supply the name of some other security group you’ve already created. Here’s the magic jewel for creating three-tier architectures; it’s this capability that answered the person’s question on the webcast.

Defining the security groups for a three-tier architecture

If you’re an API aficionado, you can use these eight simple calls to create the three required security groups to implement this architecture:

ec2-authorize WebSG -P tcp -p 80 -s 0.0.0.0/0
ec2-authorize WebSG -P tcp -p 443 -s 0.0.0.0/0
ec2-authorize WebSG -P tcp -p 22|3389 -s CorpNet

ec2-authorize AppSG -P tcp|udp -p AppPort|AppPort-Range -o WebSG
ec2-authorize AppSG -P tcp -p 22|3389 -s CorpNet

ec2-authorize DBSG -P tcp|udp -p DBPort|DBPort-Range -o AppSG
ec2-authorize DBSG -P tcp -p 22|3389 -s CorpNet
ec2-authorize DBSG -P tcp -p 22|3389 -s VendorNet

Note here the interesting distinction in the parameters used with the commands. If the rule permits a source IP address or range, the parameter is “-s” which indicates source. If the rule permits some other security group, the parameter is “-o” which indicates origin. Neat, huh?

The color coding in the rule list helps you visualize how the rules relate to each other:

  • The first three statements define WebSG, the security group for the web tier. The first two rules in the group permit inbound traffic to destination ports 80/tcp and 443/tcp from any node on the Internet. The third rule in the group permits inbound traffic to management ports (22/tcp for SSH, 3389/tcp for RDP) from the IP address range of your internal corporate network — this is optional, but probably a good idea if you ever need to administer your instances :)
  • The next two statements define AppSG, the security group for the application tier. The second rule in the group permits inbound traffic to management ports from your corpnet. The first rule in the group permits inbound traffic from WebSG — the origin — to the application’s listening port(s).
  • The final three statements define DBSG, the security group for the database tier. The second and third rules in the group permit inbound traffic to management ports from your corpnet and from your database vendor’s network (required for certain third-party database products). The first rule in the group permits inbound traffic from AppSG — the origin — to the database’s listening port(s).

Of course, not everyone’s a programmer (your humble author included), so here are some screen shots showing how to define these security groups using the AWS Management Console. Please be aware that using the Console produces different results, which I’ll describe in a moment.

WebSG permitting HTTP from the Internet, HTTPS from the Internet, and RDP from our sample corpnet address range:

Three tier - 1 - WebSG

AppSG permitting connections from instances in WebSG and RDP from our sample corpnet address range:

Three tier - 2 - AppSG

DBSG permitting connections from instances in AppSG and RDP from our sample corpnet and vendor address ranges:

Three tier - 3 - DBSG

Important. The AWS APIs and the Management Console behave differently when defining security groups as origins:

  • Management console: When you define a rule using the name of a security group in the “Source (IP or group)” column, you can’t define specific protocols or ports. The console automatically expands your single rule into the three you see: one for all ICMP, one for all TCP, and one for all UDP. If you remove one of them, the console will remove the other two. If you wish to further limit inbound traffic on those instances, feel free to use a software firewall such as iptables or the Windows Firewall.
  • SOAP and Query APIs: With the APIs, rules containing security group origins can include protocol and port specifications. The result is only the rules you define, not the three broad automatic rules like the console creates. This provides you with greater control and reduces potential exposure, so I’d recommend using the APIs rather than the Console. As of now, while the Console correctly displays whatever rules you define with the APIs, please don’t modify API-created rules because the Console’s behavior will override your changes. We’re working to make the Console support the same functionality as the APIs.

More information

The latest API documentation provides details and examples of how to configure rules in security groups. To learn more, please see:

I hope this short tutorial has been useful for you and provides information you can use as you plan migrations to or new implementations in AWS. Over time, I’d like to write more short security and privacy related guides which I’ll post here and in our Security Center. If you have comments or suggestions about content you’d like to see, please let us know. We’re here to make sure you succeed!

> Steve <