AWS News Blog

Assay Depot

Assay Depot is an online marketplace for pharmaceutical research services. After I saw a mention of EC2 on their blog I sent them a quick note so that I could learn a bit more about what they are doing.

Chris Peterson, Assay Depot’s CIO, was kind enough to take some time away from his sunny San Diego weekend to write up a summary of their approach. Here’s what he had to say, in his own words.

Introduction to Assay Depot

The Assay Depot is trying to improve the way drugs are developed by streamlining how pharmaceutical companies and biotechs outsource their research. We have a website that brings hundreds (well, 40, but we just launched) of research service providers together and makes them accessible to any researcher worldwide. True to our belief that outsourcing breeds efficiency, we have outsourced all the infrastructure necessary for our website to Amazon.

Use of Amazon EC2 for Staging

Our website runs on EC2, which makes testing and scaling a breeze. Since we have created our own custom image that is fully configured for our system, we can start a new server at any time. For instance, if we have a code or database change that seems particularly dangerous, we bring up a new instance for staging. Next we deploy the latest code to it and test to our heart's content. Since the staging server is an exact copy of the production server, we can be sure our code will run the same in both environments.
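A staging run like the one described above can be sketched with the classic EC2 command line tools. The AMI ID, key pair, hostnames, and the Capistrano deploy step below are placeholders for illustration, not Assay Depot's actual setup:

```shell
# Launch a fresh staging instance from the custom AMI (ami-xxxxxxxx is a placeholder).
ec2-run-instances ami-xxxxxxxx -k my-keypair -t m1.small

# Find the new instance's public hostname once it is running.
ec2-describe-instances

# Deploy the latest code to the staging box and test it, e.g. with Capistrano:
cap deploy HOSTS=ec2-xx-xx-xx-xx.compute-1.amazonaws.com

# Tear the instance down when testing is done (instance ID is a placeholder).
ec2-terminate-instances i-xxxxxxxx
```

Because the instance starts from the same image as production, tearing it down afterwards costs nothing but the hours it ran.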

Use of Amazon EC2 and Ruby on Rails

We have built our system using Ruby on Rails. Lately there has been a lot of talk about Rails scalability, despite the fact that apps like Twitter have shown it can be done. One thing we do know is that Rails can easily scale horizontally; that is to say, you can spread your application out over many machines. EC2 allows us to take advantage of this: by tweaking our configuration ever so slightly, we can send requests to app servers running on as many different EC2 instances as we wish. This is the simplest scaling tactic imaginable, but since we just launched last week, it's good enough for now, and EC2 gives us the flexibility to cheaply experiment with different scaling techniques in the future.
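The horizontal-scaling idea, spreading incoming requests across app servers on several instances, can be sketched in Ruby. The hostnames below are placeholders, and a real deployment would do this in the web server or proxy layer rather than in application code; this is just a minimal illustration of round-robin dispatch:

```ruby
# Hypothetical pool of app servers, one per EC2 instance.
APP_SERVERS = [
  "ec2-host-a:3000",
  "ec2-host-b:3000",
  "ec2-host-c:3000",
]

# Round-robin selection: each call returns the next server in the pool,
# wrapping back to the first after the last.
counter = -1
pick_server = lambda do
  counter = (counter + 1) % APP_SERVERS.size
  APP_SERVERS[counter]
end

# Four consecutive requests cycle through the pool and wrap around.
picks = 4.times.map { pick_server.call }
puts picks.inspect
```

Adding capacity under this scheme is just a matter of launching another instance from the image and appending its address to the pool.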


Use of Amazon S3 for Backups

All of our production backups are done to S3. We use s3fs to mount an S3 bucket as a directory on our instance, and we use cron jobs to back up our files and databases. The files are backed up using rsync. The databases are a little more interesting: to avoid performance issues on our production database, we have a master/slave setup. The master database serves the website and replicates to the slave, and the slave database is used only for backup. We back up the slave database continuously and store snapshots at hourly, daily, weekly and monthly intervals.
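The backup arrangement described above can be sketched as an s3fs mount plus a pair of crontab entries. The bucket name, paths, and database host are placeholders, and exact s3fs and mysqldump options vary by version:

```shell
# Mount an S3 bucket as a local directory with s3fs (names are placeholders).
s3fs my-backup-bucket /mnt/s3backup

# Crontab entries (note: % must be escaped as \% inside a crontab):
# Hourly: rsync application files into the S3-backed directory.
0 * * * * rsync -a /var/www/app/shared/files/ /mnt/s3backup/files/
# Hourly: dump the slave database (never the master) to a timestamped snapshot.
30 * * * * mysqldump -h slave-db-host production_db | gzip > /mnt/s3backup/db/hourly-$(date +\%H).sql.gz
```

Dumping from the slave keeps the backup load off the master, which is free to serve the website.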

Transience Forces Good Process

When I first used EC2, it took a while to get used to the idea that everything is transient. If a server crashes (not that it ever has), everything that isn't in the original image is lost. This seemed like a drawback at first, but it has forced us to have very robust and well-tested backup practices, which has turned out to be an advantage. Now we're confident that all our data is backed up and secure on Amazon's servers.


Amazon’s web services have enabled our company to provide world-class service to our customers at a fraction of the cost of traditional methods. I wholeheartedly endorse AWS and have been recommending them to people since we started using them.

Thanks, Chris, for taking the time to put that all together. I hope that it gives the readers of this blog some insight into how companies are using our web services to create innovative and flexible new business models.

— Jeff;

Modified 10/24/2020 – In an effort to ensure a great experience, expired links in this post have been updated or removed from the original post.
Jeff Barr

Jeff Barr is Chief Evangelist for AWS. He started this blog in 2004 and has been writing posts just about non-stop ever since.