I like to watch long-term technology and business trends and watch as they shape the products and services that I get to use and to write about. As I was preparing to write today’s post, three such trends came to mind:
- Moore’s Law – Coined in 1965, Moore’s Law postulates that the number of transistors on a chip doubles every year.
- Mass Market / Mass Production – Because all of the technologies that we produce, use, and enjoy every day consume vast numbers of chips, there’s a huge market for them.
- Specialization – Due to the previous trend, even niche markets can be large enough to be addressed by purpose-built products.
As the industry pushes forward in accord with these trends, a couple of interesting challenges have surfaced over the past decade or so. Again, here’s a quick list (yes, I do think in bullet points):
- Speed of Light – Even as transistor density increases, the speed of light imposes scaling limits (as computer pioneer Grace Hopper liked to point out, electricity can travel slightly less than 1 foot in a nanosecond).
- Semiconductor Physics – Fundamental limits in the switching time (on/off) of a transistor ultimately determine the minimum achievable cycle time for a CPU.
- Memory Bottlenecks – The well-known von Neumann Bottleneck imposes limits on the value of additional CPU power.
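Hopper's rule of thumb is easy to check with a quick back-of-the-envelope calculation (a sketch of my own; the only inputs are physical constants):

```python
# How far does a signal get in one nanosecond?
C_M_PER_S = 299_792_458      # speed of light in a vacuum, meters/second
METERS_PER_FOOT = 0.3048

distance_ft = C_M_PER_S * 1e-9 / METERS_PER_FOOT
print(f"Light travels {distance_ft:.3f} feet per nanosecond")
# Signals in copper propagate at roughly two-thirds of c, so the practical
# distance is shorter still -- hence "slightly less than 1 foot."
```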
The GPU (Graphics Processing Unit) was born of these trends, and addresses many of the challenges! Processors have reached the upper bound on clock rates, but Moore’s Law gives designers more and more transistors to work with. Those transistors can be used to add more cache and more memory to a traditional architecture, but the von Neumann Bottleneck limits the value of doing so. On the other hand, we now have large markets for specialized hardware (gaming comes to mind as one of the early drivers for GPU consumption). Putting all of this together, the GPU scales out (more processors and parallel banks of memory) instead of up (faster processors and bottlenecked memory). Net-net: the GPU is an effective way to use lots of transistors to provide massive amounts of compute power!
With all of this as background, I would like to tell you about the newest EC2 instance type, the P2. These instances were designed to chew through tough, large-scale machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.
New P2 Instance Type
This new instance type incorporates up to 8 NVIDIA Tesla K80 Accelerators, each running a pair of NVIDIA GK210 GPUs. Each GPU provides 12 GB of memory (accessible via 240 GB/second of memory bandwidth), and 2,496 parallel processing cores. They also include ECC memory protection, allowing them to fix single-bit errors and to detect double-bit errors. The combination of ECC memory protection and double precision floating point operations makes these instances a great fit for all of the workloads that I mentioned above.
Here are the instance specs:
| Instance Name | GPU Count | vCPU Count | Memory | Parallel Processing Cores | GPU Memory | Network Performance |
|---|---|---|---|---|---|---|
| p2.xlarge | 1 | 4 | 61 GiB | 2,496 | 12 GB | High |
| p2.8xlarge | 8 | 32 | 488 GiB | 19,968 | 96 GB | 10 Gigabit |
| p2.16xlarge | 16 | 64 | 732 GiB | 39,936 | 192 GB | 20 Gigabit |
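The per-instance totals in the table are simply multiples of the per-GPU figures quoted above (2,496 cores and 12 GB of memory per GK210 GPU). Here's a quick sanity check (my own sketch, not an AWS tool):

```python
# Per-GPU figures for the NVIDIA GK210 (from the K80 specs above).
CORES_PER_GPU = 2496
MEM_GB_PER_GPU = 12

# GPU counts per P2 size, from the table above.
p2_sizes = {"p2.xlarge": 1, "p2.8xlarge": 8, "p2.16xlarge": 16}

for name, gpus in p2_sizes.items():
    cores = gpus * CORES_PER_GPU
    mem = gpus * MEM_GB_PER_GPU
    print(f"{name}: {cores:,} cores, {mem} GB GPU memory")
# p2.16xlarge: 39,936 cores, 192 GB GPU memory
```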
All of the instances are powered by an AWS-Specific version of Intel’s Broadwell processor, running at 2.7 GHz. The p2.16xlarge gives you control over C-states and P-states, and can turbo boost up to 3.0 GHz when running on 1 or 2 cores.
The GPUs support CUDA 7.5 and above, OpenCL 1.2, and the GPU Compute APIs. The GPUs on the p2.8xlarge and the p2.16xlarge are connected via a common PCI fabric. This allows for low-latency, peer to peer GPU to GPU transfers.
All of the instances make use of our new Elastic Network Adapter (ENA – read Elastic Network Adapter – High Performance Network Interface for Amazon EC2 to learn more) and can, per the table above, support up to 20 Gbps of low-latency networking when used within a Placement Group.
Having a powerful multi-vCPU processor and multiple, well-connected GPUs on a single instance, along with low-latency access to other instances with the same features creates a very impressive hierarchy for scale-out processing:
- One vCPU
- Multiple vCPUs
- One GPU
- Multiple GPUs in an instance
- Multiple GPUs in multiple instances within a Placement Group
P2 instances are VPC only, require the use of 64-bit, HVM-style, EBS-backed AMIs, and you can launch them today in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions as On-Demand Instances, Spot Instances, Reserved Instances, or Dedicated Hosts.
Here’s how I installed the NVIDIA drivers and the CUDA toolkit on my P2 instance, after first creating, formatting, attaching, and mounting (to /ebs) an EBS volume that had enough room for the CUDA toolkit and the associated samples (10 GiB is more than enough):
$ cd /ebs
$ sudo yum update -y
$ sudo yum groupinstall -y "Development tools"
$ sudo yum install -y kernel-devel-`uname -r`
$ wget http://us.download.nvidia.com/XFree86/Linux-x86_64/352.99/NVIDIA-Linux-x86_64-352.99.run
$ wget http://developer.download.nvidia.com/compute/cuda/7.5/Prod/local_installers/cuda_7.5.18_linux.run
$ chmod +x NVIDIA-Linux-x86_64-352.99.run
$ sudo ./NVIDIA-Linux-x86_64-352.99.run
$ chmod +x cuda_7.5.18_linux.run
$ sudo ./cuda_7.5.18_linux.run    # Don't install the driver, just install CUDA and the samples
$ sudo nvidia-smi -pm 1
$ sudo nvidia-smi -acp 0
$ sudo nvidia-smi --auto-boost-permission=0
$ sudo nvidia-smi -ac 2505,875
The NVIDIA-Linux-x86_64-352.99.run and cuda_7.5.18_linux.run installers are interactive programs; when I ran them I accepted the license agreements, chose some options (installing the CUDA toolkit and the samples, but not the bundled driver), and entered some paths.
P2 and OpenCL in Action
With everything set up, I took this Gist and compiled it on a p2.8xlarge instance:
[ec2-user@ip-10-0-0-242 ~]$ gcc test.c -I /usr/local/cuda/include/ -L /usr/local/cuda-7.5/lib64/ -lOpenCL -o test
Here’s what it reported:
[ec2-user@ip-10-0-0-242 ~]$ ./test
1. Device: Tesla K80
 1.1 Hardware version: OpenCL 1.2 CUDA
 1.2 Software version: 352.99
 1.3 OpenCL C version: OpenCL C 1.2
 1.4 Parallel compute units: 13
2. Device: Tesla K80
 2.1 Hardware version: OpenCL 1.2 CUDA
 2.2 Software version: 352.99
 2.3 OpenCL C version: OpenCL C 1.2
 2.4 Parallel compute units: 13
3. Device: Tesla K80
 3.1 Hardware version: OpenCL 1.2 CUDA
 3.2 Software version: 352.99
 3.3 OpenCL C version: OpenCL C 1.2
 3.4 Parallel compute units: 13
4. Device: Tesla K80
 4.1 Hardware version: OpenCL 1.2 CUDA
 4.2 Software version: 352.99
 4.3 OpenCL C version: OpenCL C 1.2
 4.4 Parallel compute units: 13
5. Device: Tesla K80
 5.1 Hardware version: OpenCL 1.2 CUDA
 5.2 Software version: 352.99
 5.3 OpenCL C version: OpenCL C 1.2
 5.4 Parallel compute units: 13
6. Device: Tesla K80
 6.1 Hardware version: OpenCL 1.2 CUDA
 6.2 Software version: 352.99
 6.3 OpenCL C version: OpenCL C 1.2
 6.4 Parallel compute units: 13
7. Device: Tesla K80
 7.1 Hardware version: OpenCL 1.2 CUDA
 7.2 Software version: 352.99
 7.3 OpenCL C version: OpenCL C 1.2
 7.4 Parallel compute units: 13
8. Device: Tesla K80
 8.1 Hardware version: OpenCL 1.2 CUDA
 8.2 Software version: 352.99
 8.3 OpenCL C version: OpenCL C 1.2
 8.4 Parallel compute units: 13
As you can see, I have a ridiculous amount of compute power available at my fingertips!
New Deep Learning AMI
As I said at the beginning, these instances are a great fit for machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.
In order to help you to make great use of one or more P2 instances, we are launching a Deep Learning AMI today. Deep learning has the potential to generate predictions (also known as scores or inferences) that are more reliable than those produced by less sophisticated machine learning, at the cost of a more complex and more computationally intensive training process. Fortunately, the newest generations of deep learning tools are able to distribute the training work across multiple GPUs on a single instance as well as across multiple instances, each containing multiple GPUs.
The new AMI contains the following frameworks, each installed, configured, and tested against the popular MNIST database:
Caffe – This deep learning framework was designed with expression, speed, and modularity in mind. It was developed at the Berkeley Vision and Learning Center (BVLC) with assistance from many community contributors.
Theano – This Python library allows you to define, optimize, and evaluate mathematical expressions that involve multi-dimensional arrays.
TensorFlow™ – This is an open source library for numerical calculation using data flow graphs (each node in the graph represents a mathematical operation; each edge represents multidimensional data communicated between them).
Consult the README file in ~ec2-user/src to learn more about these frameworks.
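The data-flow-graph idea behind TensorFlow can be illustrated in a few lines of plain Python: nodes are operations, edges carry values between them, and evaluation walks the graph. (This is a toy of my own making, not TensorFlow's actual API.)

```python
import operator

# Each node is an operation; edges carry data between nodes.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self):
        # Recursively evaluate upstream nodes, then apply this node's op.
        return self.op(*(n.eval() for n in self.inputs))

class Const(Node):
    """A source node that simply emits a fixed value."""
    def __init__(self, value):
        self.value = value

    def eval(self):
        return self.value

# Build the graph for (2 + 3) * 4, then evaluate it.
graph = Node(operator.mul, Node(operator.add, Const(2), Const(3)), Const(4))
print(graph.eval())  # prints 20
```

Real frameworks gain a lot from this representation: because the whole computation is a graph, independent subgraphs can be dispatched to different GPUs in parallel.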
AMIs from NVIDIA
You may also find the following AMIs to be of interest:
- Windows Server 2012 with the NVIDIA Driver.
- NVIDIA CUDA Toolkit 7.5 on Amazon Linux.
- NVIDIA DIGITS 4 on Ubuntu 14.04.
We launched EC2 Reserved Instances almost eight years ago. The model that we originated in 2009 provides you with two separate benefits: capacity reservations and a significant discount on the use of specific instances in an Availability Zone. Over time, based on customer feedback, we have refined the model and made additional options available including Scheduled Reserved Instances, the ability to Modify Reserved Instances Reservations, and the ability to buy and sell Reserved Instances (RIs) on the Reserved Instance Marketplace.
Today we are enhancing the Reserved Instance model once again. Here’s what we are launching:
Regional Benefit – Many customers have told us that the discount is more important than the capacity reservation, and that they would be willing to trade it for increased flexibility. Starting today, you can choose to waive the capacity reservation associated with a Standard RI, run your instance in any AZ in the Region, and have your RI discount automatically applied.
Convertible Reserved Instances – Convertible RIs give you even more flexibility and offer a significant discount (typically 45% compared to On-Demand). They allow you to change the instance family and other parameters associated with a Reserved Instance at any time. For example, you can convert C3 RIs to C4 RIs to take advantage of a newer instance type, or convert C4 RIs to M4 RIs if your application turns out to need more memory. You can also use Convertible RIs to take advantage of EC2 price reductions over time.
Let’s take a closer look…
Reserved Instances (either Standard or Convertible) can now be set to automatically apply across all Availability Zones in a region. The regional benefit automatically applies your RIs to instances across all Availability Zones in a region, broadening the application of your RI discounts. When this benefit is used, capacity is not reserved since the selection of an Availability Zone is required to provide a capacity reservation. In dynamic environments where you frequently launch, use, and then terminate instances this new benefit will expand your options and reduce the amount of time you spend seeking optimal alignment between your RIs and your instances. In horizontally scaled architectures using instances launched via Auto Scaling and connected via Elastic Load Balancing, this new benefit can be of considerable value.
After you click on Purchase Reserved Instances in the AWS Management Console, clicking on Search will display RIs that have this new benefit:
You can check Only show offerings that reserve capacity if you want to shop for RIs that apply to a single Availability Zone and also reserve capacity:
Perhaps you, like many of our customers, purchase RIs to get the best pricing for your workloads. However, if you don’t have a good understanding of your long-term requirements you may be able to make use of our new Convertible RI. If your needs change, you simply exchange your Convertible Reserved Instances for other ones. You can change into Convertible RIs that have a new instance type, operating system, or tenancy without resetting the term. Also, there’s no fee for making an exchange and you can do so as often as you like.
When you make the exchange, you must acquire new RIs that are of equal or greater value than those you started with; in some cases you’ll need to make a true-up payment in order to balance the books. The exchange process is based on the list value of each Convertible RI; this value is simply the sum of all payments you’ll make over the remaining term of the original RI.
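The list-value arithmetic is straightforward; here's a sketch of how the true-up works (the prices and terms are hypothetical numbers of my own invention, not actual AWS rates):

```python
def true_up_payment(remaining_old, remaining_new):
    """Compare list values: the sum of all payments remaining on each term.
    The new RIs must be of equal or greater value; if they are worth more,
    you pay the difference to balance the books."""
    return max(0.0, sum(remaining_new) - sum(remaining_old))

# Hypothetical example: 24 monthly payments of $50 remain on the old
# Convertible RI, and the desired new RI costs $60/month over the same
# remaining term.
payment = true_up_payment([50.0] * 24, [60.0] * 24)
print(f"True-up payment: ${payment:.2f}")  # prints $240.00
```

Note that the exchange never produces a refund: if the new RIs are worth less than the old ones, the payment is simply zero, which is why you must trade into equal or greater value.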
You can shop for a Convertible RI by setting the Offering Class to Convertible before clicking on Search:
The Convertible RIs offer capacity assurance, are typically priced at a 45% discount when compared to On-Demand, and are available for all current EC2 instance types on a three year term. All three payment options (No Upfront, Partial Upfront, and All Upfront) are available.
All of the purchasing and exchange options that I described above can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the Reserved Instance APIs (
ModifyReservedInstances, and so forth).
Convertible RIs and the regional benefit are available in all public AWS Regions, excluding AWS GovCloud (US) and China (Beijing), which are coming soon.

— Jeff;
I would like to extend a very warm welcome to the newest AWS Community Heroes:
- Cyrus Wong
- Paul Duvall
- Vit Niennattrakul
- Habeeb Rahman
- Francisco Edilton
- Jeevan Dongre
The Heroes share their knowledge and demonstrate their enthusiasm for AWS via social media, blog posts, user groups, and workshops. Let’s take a look at their bios to learn more.
Based in Hong Kong, Cyrus is a Data Scientist in the IT Department of the Hong Kong Institute of Vocational Education. He actively promotes the use of AWS at live events and via social media, and has received multiple awards for his AWS-powered Data Science and Machine Learning Projects.
Cyrus provides professional AWS training to students in Hong Kong, with an eye toward certification. One of his most popular blog posts is How to get all AWS Certifications in Asia, where he recommends watching the entire set of re:Invent videos at a 2.0 to 2.5x speedup!
Based in Northern Virginia, Paul is an AWS Certified SysOps Administrator and an AWS Certified Solutions Architect, and has been designing, implementing, and managing software and systems for over 20 years. He has written over 30 articles on AWS, automation, and DevOps and is currently writing a book on Enterprise DevOps in AWS.
Armed with a Ph.D. in time series data mining and passionate about machine learning, artificial intelligence, and natural language processing, Vit is a consummate entrepreneur who has already founded four companies including Dailitech, an AWS Consulting Partner. They focus on cloud migration and cloud-native applications, and have also created cloud-native solutions for their customers.
Shortly after starting to use AWS in 2013, Vit decided that it could help to drive innovation in Thailand. In order to make this happen, he founded the AWS User Group Thailand and has built it up to over 2,000 members.
Based in India, Habeeb is interested in cognitive science and leadership, and works on application delivery automation at Citrix. Before that, he helped to build AWS-powered SaaS infrastructure at Apigee, and held several engineering roles at Cable & Wireless.
After presenting at AWS community meetups and conferences, Habeeb helped to organize the AWS User Group in Bangalore and is actively pursuing his goal of making it the best user group in India for peer learning.
As a self-described “full-time geek,” Francisco likes to study topics related to cloud computing, and is also interested in the stock market, travel, and food. He brings over 15 years of network security and Linux server experience to the table, and is currently deepening his knowledge of AWS by learning about serverless computing and data science.
Francisco works for TDSIS, a Brazilian company that specializes in cloud architecture, software development, and network security, and helps customers of all sizes to make the move to the cloud. On the AWS side, Francisco organizes regular AWS Meetups in São Paulo, Brazil, writes blog posts, and posts code to his GitHub repo.
As a DevOps Engineer based in India, Jeevan has built his career around application development, e-commerce, and product development. His passions include automation, cloud computing, and the management of large-scale web applications.
Back in 2011, Jeevan and several other like-minded people formed the Bengaluru AWS User Group in order to share and develop AWS knowledge and skills. The group is still going strong and Jeevan expects it to become the premier group for peer-to-peer learning.
Please join me in offering a warm welcome to our newest AWS Community Heroes!
My family and my friends love the Amazon Echo in our kitchen! In the past week we have asked for jokes, inquired about the time of the impending sunset, played music, and checked on the time for the next Seattle Seahawks game. Many of our guests already know how to make requests of Alexa. The others learn after hearing an example or two, and quickly take charge.
While Alexa is pretty cool as-is, we are highly confident that it can be a lot cooler. We want our customers to be able to hold lengthy, meaningful conversations with their Alexa-powered devices. Imagine the day when Alexa is as fluent as LCARS, the computer in Star Trek!
In order to help make conversational Artificial Intelligence (AI) a reality, I am happy to announce the first annual Alexa Prize. This is an annual university competition aimed at advancing the field of conversational AI, with Amazon investing up to 2.5 million dollars in the first year.
Teams of university students (each led by a faculty sponsor) can use the Alexa Skills Kit (ASK) to build a “socialbot” that is able to converse with people about popular topics and news events. Participants will have access to a corpus of digital content from multiple sources including the Washington Post, which has agreed to make their corpus available to the students for non-commercial use.
Millions of Alexa customers will initiate conversations with the socialbots on topics such as celebrity gossip, scientific breakthroughs, sports, and technology (to name a few). After each conversation concludes, Alexa users will provide feedback that will help the students to improve their socialbot. This feedback will also help Amazon to select the socialbots that will advance to the final phase.
Teams have until October 28, 2016 to apply. Up to 10 teams will be sponsored by Amazon and will receive a $100,000 stipend, Alexa-enabled devices, free AWS services, and support from the Alexa team; other teams may also be invited to participate.
On November 14, we’ll announce the selected teams and the competition will begin.
In November 2017, the competition will conclude at AWS re:Invent. At that time, the team behind the best-performing socialbot will be awarded a $500,000 prize, with an additional $1,000,000 awarded to their university if their socialbot achieves the grand challenge of conversing coherently and engagingly with humans for 20 minutes.

— Jeff;
As cloud computing becomes the new normal for organizations all over the world and as our customer base becomes larger and more diverse, we will continue to build and launch additional AWS Regions.
Bonjour la France
I am happy to announce that we will be opening an AWS Region in Paris, France in 2017. The new Region will give AWS partners and customers the ability to run their workloads and store their data in France.
This will be the fourth AWS Region in Europe. We currently have two other Regions in Europe — EU (Ireland) and EU (Frankfurt) — and an additional Region in the UK expected to launch in the coming months. Together, these Regions will provide our customers with a total of 10 Availability Zones (AZs) and allow them to architect highly fault tolerant applications while storing their data in the EU.
Today’s announcement means that our global infrastructure now comprises 35 Availability Zones across 13 geographic regions worldwide, with another five AWS Regions (and 12 Availability Zones) in France, Canada, China, Ohio, and the United Kingdom coming online throughout the next year (see the AWS Global Infrastructure page for more info).
As always, we are looking forward to serving new and existing French customers and working with partners across Europe. Of course, the new Region will also be open to existing AWS customers who would like to process and store data in France.
To learn more about the AWS France Region feel free to contact our team in Paris at firstname.lastname@example.org.
After an organization decides to move to the AWS Cloud and to start taking advantage of the benefits that it offers, one of the next steps is to figure out how to properly architect their applications. Having talked to many of them, I know that they are looking for best practices and prescriptive design patterns, along with some ready-made solutions and some higher-level strategic guidance.
To this end, I am pleased to share the new AWS Answers page with you:
Designed to provide you with clear answers to your common questions about architecting, building, and running applications on AWS, the page includes categorized guidance on account, configuration & infrastructure management, logging, migration, mobile apps, networking, security, and web applications. The information originates from well-seasoned AWS architects and is presented in Q&A format. Every contributor to the answers presented on this page has spent time working directly with our customers and their answers reflect the hands-on experience that they have accumulated in the process.
Each answer offers prescriptive guidance in the form of a high-level brief or a fully automated solution that you can deploy using AWS CloudFormation, along with a supporting Implementation Guide that you can view online or download in PDF form. Here are a few to whet your appetite:
How can I Deploy Preconfigured Protections Using AWS WAF? – The solution will set up preconfigured AWS WAF rules and custom components, including a honeypot, in the configuration illustrated on the right.
How do I Automatically Start and Stop my Amazon EC2 Instances? – The solution will set up the EC2 Scheduler in order to stop EC2 instances that are not in use, and start them again when they are needed.
What Should I Include in an Amazon Machine Image? – This brief provides best practices for creating images and introduces three common AMI designs.
How do I Implement VPN Monitoring on AWS? – The solution will deploy a VPN Monitor and automatically record historical data as a custom CloudWatch metric.
How do I Share a Single VPN Connection with Multiple VPCs? – This brief helps you minimize the number of remote connections between multiple Amazon VPC networks and your on-premises infrastructure.

— Jeff;
Eighteen (18) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.
New & Notable Open Source
- ecs-refarch-cloudformation is a reference architecture for deploying Microservices with Amazon ECS, AWS CloudFormation (YAML), and an Application Load Balancer.
- rclone syncs files and directories to and from S3 and many other cloud storage providers.
- Syncany is an open source cloud storage and filesharing application.
- chalice-transmogrify is an AWS Lambda Python Microservice that transforms arbitrary XML/RSS to JSON.
- amp-validator is a serverless AMP HTML Validator Microservice for AWS Lambda.
- ecs-pilot is a simple tool for managing AWS ECS.
- vman is an object version manager for AWS S3 buckets.
- aws-codedeploy-linux is a demo of how to use CodeDeploy and CodePipeline with AWS.
- autospotting is a tool for automatically replacing EC2 instances in AWS AutoScaling groups with compatible instances requested on the EC2 Spot Market.
- shep is a framework for building APIs using AWS API Gateway and Lambda.
New SlideShare Presentations
- Automated DevOps Workflows with Chef on AWS.
- Rackspace: Best Practices for Security Compliance on AWS.
- Getting Started with AWS Lambda and the Serverless Cloud.
- Introduction to Microservices.
- AWS CloudFormation Best Practices.
- Configuration Management with AWS OpsWorks.
- ClearScale: Continuous Automation with Docker on AWS.
- FireEye: Seamless Visibility and Detection for the Cloud.
- Releasing Software Quickly and Reliably with AWS CodePipeline.
- Running Microservices on AWS Elastic Beanstalk.
- Deep Dive on Microservices and Amazon ECS.
- Managing Your Infrastructure as Code.
- Automating Software Deployments with AWS CodeDeploy.
- Improving Infrastructure Governance on AWS.
- DevOps at Amazon: A Look at Our Tools and Processes.
- AWS Enterprise Summit Netherlands:
- AWS September Webinar Series:
- Test Android and iOS apps on Real Devices with AWS Device Farm.
- Migrate your Data Warehouse to Amazon Redshift.
- Log Analytics with Amazon Elasticsearch Service.
- Real-Time Data Processing Using AWS Lambda.
- Getting Started with Cognito User Pools.
- Monitoring Containers at Scale.
- Deep Dive Amazon Redshift for Big Data Analytics.
New Customer Success Stories
- NetSeer significantly reduces costs, improves the reliability of its real-time ad-bidding cluster, and delivers 100-millisecond response times using AWS. The company offers online solutions that help advertisers and publishers match search queries and web content to relevant ads. NetSeer runs its bidding cluster on AWS, taking advantage of Amazon EC2 Spot Fleet Instances.
- New York Public Library revamped its fractured IT environment—which had older technology and legacy computing—to a modernized platform on AWS. The New York Public Library has been a provider of free books, information, ideas, and education for more than 17 million patrons a year. Using Amazon EC2, Elastic Load Balancer, Amazon RDS and Auto Scaling, NYPL is able to build scalable, repeatable systems quickly at a fraction of the cost.
- MakerBot uses AWS to understand what its customers need, and to go to market faster with new and innovative products. MakerBot is a desktop 3-D printing company with more than 100 thousand customers using its 3-D printers. MakerBot uses Matillion ETL for Amazon Redshift to process data from a variety of sources in a fast and cost-effective way.
- University of Maryland, College Park uses the AWS cloud to create a stable, secure and modern technical environment for its students and staff while ensuring compliance. The University of Maryland is a public research university located in the city of College Park, Maryland, and is the flagship institution of the University System of Maryland. The university uses AWS to migrate all of their datacenters to the cloud, as well as Amazon WorkSpaces to give students access to software anytime, anywhere and with any device.
- September 27 (Webinar) – Automating Compliance Defense in the Cloud.
- September 28 (Webinar) – Addressing Amazon Inspector Assessment Findings.
- September 27-29 (Seoul, Korea) – AWS Korea Monthly Webinars – Cloud Security Special.
- September 28 (London, UK) – Meetup #22 of the AWS User Group UK in London.
- September 28 (Seoul, Korea) – Gaming on AWS Conference.
- September 29 (Ipswich, UK) – Meetup #1 of the new Ipswich AWS User Group.
- September 29 (Cork, Ireland) – AWS AWSome Day.
- September 29 (Cork, Ireland) – Meetup #2 of the AWS User Group Network Meetup in Cork.
- September 29 (Katowice, Poland) – Meetup #17 of the AWS User Group Poland in Katowice.
- September 29 (Sofia, Bulgaria) – Meetup of the AWS Bulgaria User Group in Sofia.
- October 10 (Oslo, Norway) – AWS User Group Norway: Say Hello to Alexa!.
- October 10 (Seoul, Korea) – AWS Partner-led Hands-on Labs.
- October 13 (Seoul, Korea) – AWS Enterprise Summit.
- October 13 (Warsaw, Poland) – Meetup of the Public Cloud User Group in Warsaw, Poland.
- October 14 (Seoul, Korea) – AWS Lambda Zombie Workshop.
- Linux Cloud Engineer at Red Wire Services (100% AWS Role, AWS Advanced Consulting Partner).
- Teridion Sales Engineer (Cloud Optimized Routing for SaaS).
- AWS Careers.
EC2’s M4 instances offer a balance of compute, memory, and networking resources and are a good choice for many different types of applications.
We launched the M4 instances last year (read The New M4 Instance Type to learn more) and gave you a choice of five sizes, from large up to 10xlarge. Today we are expanding the range with the introduction of a new m4.16xlarge with 64 vCPUs and 256 GiB of RAM. Here’s the complete set of specs:
| Instance Name | vCPU Count | RAM | Instance Storage | Network Performance | EBS-Optimized Bandwidth |
|---|---|---|---|---|---|
| m4.large | 2 | 8 GiB | EBS Only | Moderate | 450 Mbps |
| m4.xlarge | 4 | 16 GiB | EBS Only | High | 750 Mbps |
| m4.2xlarge | 8 | 32 GiB | EBS Only | High | 1,000 Mbps |
| m4.4xlarge | 16 | 64 GiB | EBS Only | High | 2,000 Mbps |
| m4.10xlarge | 40 | 160 GiB | EBS Only | 10 Gbps | 4,000 Mbps |
| m4.16xlarge | 64 | 256 GiB | EBS Only | 20 Gbps | 10,000 Mbps |
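A pattern worth noticing in the table: every M4 size provides 4 GiB of RAM per vCPU, and the new m4.16xlarge simply continues the curve. A quick consistency check (my own sketch based on the table):

```python
m4_specs = {  # vCPU count, RAM in GiB (from the table above)
    "m4.large": (2, 8),
    "m4.xlarge": (4, 16),
    "m4.2xlarge": (8, 32),
    "m4.4xlarge": (16, 64),
    "m4.10xlarge": (40, 160),
    "m4.16xlarge": (64, 256),
}

for name, (vcpus, ram_gib) in m4_specs.items():
    # Every size keeps the same 4 GiB-per-vCPU ratio.
    assert ram_gib == 4 * vcpus, name
print("All M4 sizes provide 4 GiB of RAM per vCPU")
```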
The new instances are based on Intel Xeon E5-2686 v4 (Broadwell) processors that are optimized specifically for EC2. When used with Elastic Network Adapter (ENA) inside of a placement group, the instances can deliver up to 20 Gbps of low-latency network bandwidth. To learn more about the ENA, read my post, Elastic Network Adapter – High Performance Network Interface for Amazon EC2.
Like the m4.10xlarge, the m4.16xlarge allows you to control the C-states to enable higher turbo frequencies when you are using just a few cores. You can also control the P-states to lower performance variability (read my extended description in New C4 Instances to learn more about both of these features).
You can purchase On-Demand Instances, Spot Instances, and Reserved Instances; visit the EC2 Pricing page for more information.
As part of today’s launch we are also making the M4 instances available in the China (Beijing), South America (Brazil), and AWS GovCloud (US) regions.
Tina Barr is back with this month’s hot startups on AWS!

— Jeff;
It’s officially fall so warm up that hot cider and check out this month’s great AWS-powered startups:
- Funding Circle – The leading online marketplace for business loans.
- Karhoo – A ride comparison app.
- nearbuy – Connecting customers and local merchants across India.
Funding Circle (UK)
Funding Circle is one of the world’s leading direct lending platforms for business loans, where people and organizations can invest in successful small businesses. The platform was established in 2010 by co-founders Samir Desai, James Meekings, and Andrew Mullinger as a direct response to the noncompetitive lending market that exists in the UK. Funding Circle’s goal was to create the infrastructure – similar to a stock exchange or bond market – where any investor could lend to small businesses. With Funding Circle, individuals, financial institutions, and even governments can lend to creditworthy small businesses using an online direct lending platform. Since its inception, Funding Circle has raised $300M in equity capital from the same investors that backed Facebook, Twitter, and Sky. The platform expanded to the US market in October 2013 and launched across Continental Europe in October 2015.
Funding Circle has given businesses the ability to apply online for loans much faster than they could through traditional routes, due in part to the absence of high overhead branch costs and legacy IT issues. Their investors include more than 50,000 individuals, the Government-backed British Business Bank, the European Investment Bank, and many local councils and large financial institutions. To date, more than £1.4 billion has been lent through the platform to nearly 16,000 small businesses in the UK alone. This growth has led independent experts to predict that Funding Circle will become a major force in the UK business lending market within a decade. The platform has also made a huge impact on the UK economy – boosting it by £2.7 billion, creating up to 40,000 new jobs, and helping to build more than 2,000 new homes.
As a regulated business, Funding Circle needs separate infrastructure in multiple geographies, and AWS provides similar services across all of Funding Circle’s territories. They use the full AWS stack, from Amazon Route 53 directing traffic across global Amazon EC2 instances at the top, down to data analytics with Amazon Redshift.
Check out this short video to learn more about how Funding Circle works!
Karhoo (New York)
Daniel Ishag, founder and CEO of Karhoo, found himself in a situation many of us have probably been in. He was in a hotel in California using an app to call a cab from one of the big on-demand services. The driver cancelled. Daniel tried three or four different companies and again, they all cancelled. The very next day he was booking a flight when he saw all of the ways in which travel companies clearly presented airline choices for travelers. Daniel realized that there was great potential to translate this to ground transportation – specifically with taxis and licensed private hire. Within 48 hours of this realization, he was on his way to Bombay to prototype the product.
Karhoo is the global cab comparison and booking app that provides passengers with more choices each time they book a ride. By connecting directly to the fleet dispatch system of established black cab, minicab, and executive car operators, the app allows passengers to choose the ride they want, at the right price with no surge pricing. The vendor-neutral platform also gives passengers the ability to pre-book their rides days or months in advance. With over 500,000 cars on the platform, Karhoo is changing the landscape of the on-demand transport industry.
In order to build a scalable business, Karhoo uses AWS to implement many independent integration projects, run an operation that is data-driven, and experiment with tools and technologies without committing to heavy costs. They utilize Amazon S3 for storage and Amazon EC2, Amazon Redshift, and Amazon RDS for operation. Karhoo also uses Amazon EMR, Amazon ElastiCache, and Amazon SES and is looking into future products such as a mobile device testing farm.
Check out Karhoo’s blog to keep up with their latest news!
nearbuy (India)
nearbuy is India’s first hyper-local online platform that gives consumers and local merchants a place to discover and interact with each other. They help consumers find some of the best deals in food, beauty, health, hotels, and more in over 30 cities in India. Here’s how to use it:
- Explore options and deals at restaurants, spas, gyms, movies, hotels and more around you.
- Buy easily and securely, using credit/debit cards, net-banking, or wallets.
- Enjoy the service by simply showing your voucher on the nearbuy app (iOS and Android).
After continuously observing the amount of time people were spending on their mobile phones, six passionate individuals decided to build a product that allowed for all goods and services in India to be purchased online. nearbuy has been able to make the time gap between purchase and consumption almost instant, make experiences more relevant by offering them at the user’s current location, and allow services such as appointments and payments to be made from the app itself. The nearbuy team is currently charting a path to define how services can and will be bought online in India.
nearbuy chose AWS in order to reduce its time to market while aggressively scaling its operations. They leverage Amazon EC2 heavily and were one of the few companies in the region running their entire production load on EC2. The container-based approach has not only helped nearbuy significantly reduce its infrastructure cost, but has also enabled it to implement CI+CD (Continuous Integration / Continuous Deployment), which has dramatically reduced time to ship.
Stay connected to nearbuy by following them at https://medium.com/@nearbuy.
My colleague Sean Kelly is part of the team that produces the Amazon Linux AMI. He shared the guest post below in order to introduce you to the newest version!

— Jeff;
The Amazon Linux AMI is a supported and maintained Linux image for use on Amazon EC2.
We offer new major versions of the Amazon Linux AMI after a public testing phase that includes one or more Release Candidates. The Release Candidates are announced in the EC2 forum and we welcome feedback on them.
Launching 2016.09 Today
Today we are launching the 2016.09 Amazon Linux AMI, which is supported in all regions and on all current-generation EC2 instance types. The Amazon Linux AMI supports both HVM and PV modes, as well as both EBS-backed and Instance Store-backed AMIs.
You can launch this new version of the AMI in the usual ways. You can also upgrade an existing EC2 instance by running the following commands:
$ sudo yum clean all
$ sudo yum update

Then reboot the instance.
The Amazon Linux AMI’s roadmap is driven in large part by customer requests. We’ve added a number of features in this release in response to these requests and to keep our existing feature set up-to-date:
Nginx 1.10 – Based on numerous customer requests, the Amazon Linux AMI 2016.09 repositories include the latest stable Nginx 1.10 release. You can install or upgrade to the latest version with
sudo yum install nginx.
PostgreSQL 9.5 – Many customers have asked for PostgreSQL 9.5, and it is now available as a separate package from our other PostgreSQL offerings. PostgreSQL 9.5 is available via
sudo yum install postgresql95.
Python 3.5 – Python 3.5, the latest in the Python 3.x series, has been integrated with our existing Python experience and is now available in the Amazon Linux AMI repositories. This includes the associated virtualenv and pip packages, which can be used to install and manage dependencies. The default python version for /usr/bin/python can be managed via alternatives, just like our existing Python packages. Python 3.5 and the associated pip and virtualenv packages can be installed via
sudo yum install python35 python35-virtualenv python35-pip.
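As a sketch of a typical workflow after that install, assuming the versioned virtualenv-3.5 command that the python35-virtualenv package is expected to provide (the environment path ~/py35env is an arbitrary example, and requests is just a stand-in dependency):

```shell
# Create and use an isolated Python 3.5 environment. The guard lets the
# sketch degrade gracefully where the package has not been installed yet.
VENV="$HOME/py35env"
if command -v virtualenv-3.5 >/dev/null 2>&1; then
  virtualenv-3.5 "$VENV"   # new environment with its own pip
  . "$VENV/bin/activate"   # activate it for this shell session
  pip install requests     # installs against Python 3.5 inside the env
else
  echo "python35-virtualenv not installed; run the yum command above first"
fi
```

Keeping project dependencies in a virtualenv like this avoids touching the system Python that the AMI's own tooling relies on.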
Amazon SSM Agent – The Amazon SSM Agent allows you to use Run Command in order to configure and run scripts on your EC2 instances and is now available in the Amazon Linux 2016.09 repositories (read Remotely Manage Your Instances to learn more). Install the agent by running
sudo yum install amazon-ssm-agent and start it with
sudo /sbin/start amazon-ssm-agent.
To learn more about all of the new features of the new Amazon Linux AMI, take a look at the release notes.
— Sean Kelly, Amazon Linux AMI Team
PS – If you would like to work on future versions of the Amazon Linux AMI, check out our Linux jobs!