Category: AWS Partner Solutions Architect (SA) Guest Post


SoftNAS Cloud Version 3.3 Now Available in the AWS Marketplace

The APN is fortunate to have a strong ecosystem of APN Partners that continue to iterate on their products, listening to customers to deliver solutions that address real technical needs. SoftNAS is no exception, and the release of SoftNAS Cloud 3.3 brings several substantive improvements to the company’s popular virtual NAS platform.

What is SoftNAS Cloud?

SoftNAS Cloud is a virtual filer appliance that runs on Amazon Elastic Compute Cloud (Amazon EC2), leveraging AWS storage offerings like Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Block Store (Amazon EBS). SoftNAS Cloud offers file- and block-level storage via common protocols like CIFS, NFS, and iSCSI, while providing enterprise-grade features like cross-zone replication and failover. SoftNAS continues to be a popular product in the AWS Marketplace due to its ease of use and feature set.

What’s New in SoftNAS Cloud 3.3?

An important feature of SoftNAS Cloud is the ability to establish a high availability configuration of the appliance. In this architecture, two SoftNAS instances are deployed across two Availability Zones, using SNAP-HA to replicate data between the instances. Prior to the release of 3.3, SoftNAS needed to be deployed into a public subnet, as it used Elastic IP (EIP) addresses to facilitate this communication. Version 3.3, however, enables SNAP-HA via private IP addresses, which means SoftNAS can now be deployed into private subnets. This allows for more flexible deployment options and greater control over access to the appliance. In addition, using private IPs allows for faster failover, as waiting for an EIP to switch instances is no longer necessary.

Another feature of version 3.3 is the option of creating an Amazon S3 gateway cache, in which new Amazon S3 object writes and recent object reads are cached locally on SoftNAS Cloud. Using this gateway cache model can improve latency and throughput to your Amazon S3 objects. A cache can also help smooth out any variances in network throughput, leading to more consistent performance.

Version 3.3 also introduces two features intended to improve data durability. One is the ability to create Amazon EBS snapshots from within the StorageCenter UI, so SoftNAS can serve as a central utility for managing your backups. The other is Automatic Drive Sparing, which allows a failed drive to be replaced automatically by a hot spare – no manual intervention is required for array rebuilds.
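
For context, the snapshots created through the StorageCenter UI are standard Amazon EBS snapshots. A rough equivalent using the AWS CLI looks like this (the volume ID below is hypothetical):

# Create an EBS snapshot of a data volume (volume ID is hypothetical)
$ aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "SoftNAS data volume backup"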

SoftNAS Cloud 3.3 also provides more granular monitoring from within the StorageCenter UI; for example, you can now view IOPS.

Lastly, there have been some improvements to the compatibility of SoftNAS, including support for Amazon S3 in the AWS GovCloud (US) Region and better integration with Windows “Previous Versions,” including support for snapshots, file restores, and so on.

The APN is built around APN Partners with a customer obsession, and we’re happy to see SoftNAS updating its product based on customer feedback. If you’re interested, take a look at SoftNAS Cloud 3.3 in the AWS Marketplace and try it out today.

 

Deploy High Availability Architectures with the Help of APN Consulting Partners

Guest Post: Kamal Arora, AWS GSI Partner Solutions Architect (SA)

When I work with Amazon Web Services (AWS) customers and AWS Partner Network (APN) Partners, I often hear that they need to host a highly available workload that must meet a strict “X nines” availability or uptime target. Some companies aim for better than three nines (99.9%) of availability, which allows roughly 8.8 hours of downtime per year, and many customers have achieved high levels of availability, and increased their uptime, by running on AWS. Take a look at some AWS case studies below:

What Factors Impact Service Availability?

Availability and uptime numbers are frequently misunderstood: they usually refer to application- or service-level availability, which depends on many factors beyond the underlying infrastructure components. If you think about the possible causes of service downtime, there are many, including a software upgrade, an OS upgrade, file/data corruption, defective application code, user error, and so on. The underlying infrastructure is just one component to consider, not the only factor governing service availability.

AWS Design Recommendations

Below are some AWS-specific design recommendations that you can implement in your AWS architecture:

  • Build redundancy at each layer and avoid single points of failure, including:
    • Instance/Function level – Have multiple instances for each function, including components like NAT, web, application, and database servers
    • Storage – Take regular backups to durable storage like Amazon Simple Storage Service (Amazon S3)
    • Networking – Maintain backup AWS Direct Connect (DX) or VPN connections
    • Availability Zone (AZ) level – Utilize multiple AZs within a region (some customers also use multi-region architectures for very high availability or a geographically distributed user base)
  • Externalize data/state to a common store and keep a replica
  • Use health checks, monitoring, and auto-recovery features:
    • Utilize Amazon Route 53, Elastic Load Balancing (ELB), and Amazon EC2 instance-level health/status checks. You can also tie these into Auto Scaling so that a replacement instance is launched automatically when a particular instance goes down. Amazon EC2 also offers an auto recovery feature that helps with underlying host-level failures (see the sketch after this list)
    • Enable continuous, detailed Amazon CloudWatch metrics along with custom monitoring and alerting; together these can help you detect and act on failures in near real time. You can also use the Amazon CloudWatch Logs feature for real-time monitoring of application logs
  • Optimize your application architecture around a microservices/SOA pattern – decouple components using services like Amazon Simple Queue Service (Amazon SQS), which makes the architecture resilient to individual service-level failures
  • Have graceful failure modes – for example, serve a static website directly from Amazon S3/Amazon CloudFront as a failover mechanism in case of any issues with the web/application servers serving dynamic content
  • Automate every possible action, from provisioning to updates to tear-down, for everything from a single instance to a complete stack. Use services and tools like AWS CloudFormation, Chef, and Ansible to help with that process
  • Test all possible failure points – it’s very important to test instance-, AZ-, or even region-level failures and confirm whether your architecture can sustain them. To ease the process, you can utilize Netflix’s Simian Army (tools like Chaos Monkey, Chaos Gorilla, and Chaos Kong)
  • To validate your operational readiness and to continue to add more checks over time, you can also refer to the comprehensive AWS Operational Checklist.
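
As a concrete illustration of the health check and auto recovery guidance above, here is a minimal sketch using the AWS CLI: a CloudWatch alarm that automatically recovers an instance when the underlying host fails its system status check. The instance ID and region are hypothetical, and you should tune the period and evaluation settings for your workload.

# Recover the instance if the system status check keeps failing
$ aws cloudwatch put-metric-alarm \
    --region us-west-2 \
    --alarm-name ec2-auto-recover-web01 \
    --namespace AWS/EC2 \
    --metric-name StatusCheckFailed_System \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Minimum \
    --period 60 \
    --evaluation-periods 5 \
    --threshold 0 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:automate:us-west-2:ec2:recover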

How Can APN Consulting Partners Help?

As outlined above, we have many recommendations for designing and deploying high availability architectures on AWS. There are many additional considerations when you have to roll up an application/service availability number, and that is where our APN Consulting Partners can help with their managed services, migration, monitoring, and operations expertise and offerings. Here are details on offerings from a few of our Premier APN Consulting Partners:

  • Accenture’s AWS Migration Framework: Helps with complex tasks including migration workload assessment, re-platforming, and optimized steady-state deployments.
  • Cognizant’s Cloud360 Platform: Helps you see at a glance the status of any running applications, the number of virtual machines in use, the number of instances deployed, how much of each resource is being consumed and much more.
  • Infosys’s Cloud Ecosystem Hub: Helps with the process of deploying and managing enterprise business-critical workloads on AWS.

Another aspect to consider is that you are unlikely to achieve your target number of nines on day one of your deployment; rather, it is a gradual process as you optimize your deployment and gain more operational expertise. Increasing application availability is an iterative, continuously improving cycle, and our Premier APN Consulting Partners are well equipped for this exercise, given their experience on AWS as well as their solutions and toolsets for architecting and managing your infrastructure and applications.

To conclude, to optimize your architecture and achieve your desired service availability levels, we recommend that you (a) design for failure at each layer, considering all levels of your deployment, (b) assess, automate, and optimize your operations, and (c) continuously iterate. Finally, don’t hesitate to utilize the expertise of our APN Premier Consulting Partners in this process!

Cloud Deduplication, On-Demand: StorReduce, an APN Technology Partner

Develop. Disrupt. Repeat.

Our goal is to provide our APN Partners with the services, support, and resources they need to provide their end customers with innovative value-added services and solutions on the AWS platform. We love hearing stories about the unique products our APN Technology Partners have developed that integrate with the AWS platform, and today we’re going to tell you about one such product from APN Technology Partner StorReduce.

Our Partner SA team has worked closely with StorReduce, and below we discuss why the StorReduce team chose to work with AWS. We then discuss the company’s success working with AWS customer and fellow APN Technology Partner SpectrumData.

Who is StorReduce?

StorReduce helps enterprises that store unstructured data in Amazon Simple Storage Service (Amazon S3) or Amazon Glacier reduce the amount and cost of that storage by as much as 50-95 percent. It also offers enterprises a new, more efficient way to migrate backup appliance data and large tape archives to AWS.

StorReduce’s deduplication software runs as an instance in the cloud or as a virtual machine in a datacenter and scales to petabytes of data. The deduplication removes any redundant blocks of data before it is stored, ensuring that only one copy of each block is kept. StorReduce provides throughput of up to 600 MB/s for both reads and writes, and on retrieval adds additional latency of around 10 ms. StorReduce is suitable for deduplicating most data workloads, including backup, archive, data from mobile and wearable devices where copies of the data proliferate, and general unstructured file data.

StorReduce has an Amazon S3 interface, so that any data it deduplicates can seamlessly be used by AWS services such as Amazon Elastic MapReduce (Amazon EMR) for data mining, and Amazon CloudSearch.

See the diagram below to get an idea of how StorReduce works:

StorReduce and AWS

StorReduce chose to work with AWS because of AWS’s extensive range of enterprise cloud services. For instance, storage services like Amazon S3 and Amazon Glacier, and the ecosystem of tools and services that integrate with them, are important for the enterprise workloads with which StorReduce works. The global AWS footprint was another important factor for StorReduce, along with AWS’s commitment to reducing the cost of the cloud for customers.

For the StorReduce team, AWS is a natural choice for enterprises migrating to a public or hybrid cloud environment and for high growth companies born on the cloud. StorReduce chose the Amazon S3 compatible interface because it offers a simple integration point for its customers. The Amazon S3 compatible interface allows any application that communicates with Amazon S3 to take advantage of StorReduce for deduplication without modification. This includes third party products that copy data to and from Amazon S3, as well as AWS services such as Amazon EMR and Amazon CloudSearch.
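
To make that integration point concrete: any S3 client that lets you override its endpoint can be pointed at a StorReduce instance instead of Amazon S3. With the AWS CLI, for example, this is just the --endpoint-url option; a hedged sketch follows, in which the endpoint and bucket names are hypothetical:

# Upload through a StorReduce endpoint rather than directly to Amazon S3
# (endpoint URL and bucket name are hypothetical)
$ aws s3 cp backup.tar s3://deduped-archive/backup.tar \
    --endpoint-url https://storreduce.example.com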

Who is SpectrumData?

SpectrumData, headquartered in Australia, operates globally and provides migration services that move data from legacy backup systems into the AWS Cloud as either restored data sets or virtual tapes. The company is highly experienced in all aspects of data migration, in particular the restoration, migration, and preservation of digital assets from legacy media and backup formats and from redundant, outdated tape and recording technologies.

Deduplication to the Cloud – The Challenge

SpectrumData needed to migrate its clients’ petabyte-scale tape archives (tens of thousands of tapes) to Amazon S3 and Amazon Glacier. To reduce the cost of storage and the bandwidth required to transfer the tape data to AWS, SpectrumData chose to deduplicate the data. Tape archives generally contain multiple copies of the same data sets, which can be reduced to a single copy with deduplication. This has the potential to shrink the amount of data stored to between one half and one twentieth of its original size.

According to Guy Holmes, Director of SpectrumData, “It is difficult to migrate large tape archives to the cloud using existing on-premise deduplication offerings because they do not scale.  We can only put four tapes at a time through their hardware before we start to see a bottleneck forming. In order to upload large tape archives to the cloud in weeks not years, we need to put hundreds of tapes at a time through the hardware 24 hours per day.”

Why StorReduce

For tape migration, StorReduce’s software can be installed on-premises for a CAPEX-free, very fast migration of an enterprise’s large tape archives and backup appliance data onto the AWS Cloud. Installing StorReduce on-premises minimizes the bandwidth required during the transfer. See below:

After the transfer is completed, the on-premises StorReduce software can be removed and re-instated in the cloud:

The Benefits of Working with StorReduce and AWS

For SpectrumData, the global footprint of AWS made working with AWS a natural choice. The AWS footprint allows SpectrumData to store data in close proximity to its customers no matter where they are in the world. This improves performance by reducing latency and allows SpectrumData and its customers to comply with data sovereignty laws. Another reason the company decided to work with AWS is the pay-as-you-go pricing model embraced by AWS. SpectrumData pays for exactly the resources they use, and there’s no need to estimate capacity or to make an upfront investment.

After AWS introduced SpectrumData to StorReduce, Holmes believed the software could overcome the challenges the company faced with its on-premises deduplication hardware.

SpectrumData conducted a proof of concept with StorReduce, performing the same tests on the same data that it had previously run with a leading global deduplication hardware vendor. Holmes confirmed, “We’re delighted with StorReduce’s performance. The software deduplicates 24/7 and is more scalable than the hardware appliances we tested. These factors help us to achieve the necessary throughput for our clients. It also showed deduplication ratios trending to over 95 percent, which is equal to the leading global deduplication offerings we have tested.”

StorReduce enables SpectrumData to migrate large tape archives to AWS far more efficiently than the hardware appliances that were tested, reducing years of work to weeks.

Additional benefits:

  • StorReduce can reduce or remove CAPEX that would otherwise need to be spent on deduplication hardware.
  • With StorReduce, once the tape data has been migrated to the cloud, it is seamlessly accessible via the Amazon S3 API, so existing AWS services like Amazon CloudSearch and Amazon EMR can easily access that data. This is challenging with on-premises deduplication offerings.
  • As the client’s data grows, StorReduce can quickly scale to meet their needs with no need to buy additional hardware.

Holmes concludes, “Working with StorReduce and AWS makes my business work.”

To learn more about how AWS can help with your storage and backup needs, visit our Storage and Backup details page: https://aws.amazon.com/backup-storage/.

Try StorReduce on AWS Marketplace now with one click to see how much you could save.  To learn more about how StorReduce can migrate your tape archive or backup appliance data to the AWS Cloud, click here.

 

Getting Started with Ansible and Dynamic Amazon EC2 Inventory Management

Guest post by Brandon Chavis, AWS Partner Solutions Architect

Today, the options for configuration and orchestration management seem nearly endless, making it daunting to find a tool that works well for you and your organization. Here at AWS, we think Ansible, an APN Technology Partner, provides a good option for configuration management due to its simplicity, agentless architecture, and ability to interact easily with your ever-changing, scaling, and dynamic AWS architecture.

Instead of having to push an agent to every new instance you launch via userdata, roll an agent into an AMI, or engage in similarly management-intensive deployments of your configuration management software, the Ansible framework allows administrators to run commands against Amazon Elastic Compute Cloud (Amazon EC2) instances as soon as they are available, all over SSH. This post examines ways to manage your Amazon EC2 inventory with minimal effort, despite a constantly changing fleet of instances.

This post assumes you already have Ansible installed on either your workstation or an Amazon EC2 instance – Ansible has great documentation for installation (http://docs.ansible.com/intro_installation.html) and getting started (http://docs.ansible.com/intro_getting_started.html).

I’ve chosen to use a RHEL7 Amazon EC2 instance as my Ansible “master,” first for convenience, and also because the dynamic Amazon EC2 inventory script Ansible provides runs on top of Boto. This is significant because Boto can automatically source my AWS API credentials from an AWS Identity and Access Management (IAM) role attached to the instance (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html). I’ve given my instance role Power User privileges for simplicity – you may choose to lock this down further.
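
A quick way to sanity-check that the instance can actually see role credentials is to query the EC2 instance metadata service; it should return the name of the attached role:

# Lists the IAM role attached to this instance (empty if no role is attached)
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/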

Should you decide to run Ansible on your workstation instead, you’ll need to set environment variables for your access key ID and secret access key:

$ export AWS_ACCESS_KEY_ID='YOUR_AWS_API_KEY'
$ export AWS_SECRET_ACCESS_KEY='YOUR_AWS_API_SECRET_KEY'

To get started with dynamic inventory management, you’ll need to grab the ec2.py script and the ec2.ini config file. The ec2.py script is written using the Boto EC2 library and queries AWS for your running Amazon EC2 instances. The ec2.ini file is the config file for ec2.py, and it can be used to limit the scope of Ansible’s reach: you can specify the regions, instance tags, or roles that the ec2.py script will look at. Personally, I’ve scoped Ansible to just the us-west-2 region.

https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini

* Linked from this Ansible documentation: http://docs.ansible.com/ansible/intro_dynamic_inventory.html#example-aws-ec2-external-inventory-script

Use wget, curl, or git to pull those files down into the /etc/ansible/ directory, as shown below. I had to create this directory myself because I installed Ansible through pip.
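
For example, assuming you’re using wget (the URLs are the ones listed above):

# Create the directory, download the inventory script and its config, and make the script executable
$ sudo mkdir -p /etc/ansible
$ sudo wget -O /etc/ansible/ec2.py https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
$ sudo wget -O /etc/ansible/ec2.ini https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
$ sudo chmod +x /etc/ansible/ec2.py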

Now you’ll need to set a few more environment variables for the inventory management script –

$ export ANSIBLE_HOSTS=/etc/ansible/ec2.py

This tells Ansible to use the dynamic EC2 script instead of a static /etc/ansible/hosts file.

Next, tell ec2.py where to find its config file. You can open ec2.py in a text editor and make sure the path to the ec2.ini config file is defined correctly near the top of the script, or simply set it via an environment variable:

$ export EC2_INI_PATH=/etc/ansible/ec2.ini

This tells ec2.py where the ec2.ini config file is located.
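
These exports only last for the current shell session. If you want them to persist, one option (assuming a bash shell) is to append them to your profile:

# Persist the inventory-related environment variables across sessions
$ echo 'export ANSIBLE_HOSTS=/etc/ansible/ec2.py' >> ~/.bashrc
$ echo 'export EC2_INI_PATH=/etc/ansible/ec2.ini' >> ~/.bashrc
$ source ~/.bashrc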

As a quick aside, I’ll address SSH connectivity. The Ansible documentation covers this fairly comprehensively (http://docs.ansible.com/intro_getting_started.html), but we’ll cover this briefly to get up and running. There are several options for how you’d like to handle authentication. Passwords, while not recommended, are allowed, and you can also use an SSH agent for credential forwarding.

Using an SSH agent is the best way to authenticate with your end nodes, as this alleviates the need to copy your .pem files around. To start an agent and add your key, run:


$ ssh-agent bash 
$ ssh-add ~/.ssh/keypair.pem 

More on SSH-agents: https://developer.github.com/guides/using-ssh-agent-forwarding/

At this stage, you should be ready to communicate with your instances. Here’s your mid-blog post checklist:

- Ansible is installed and has access to your access key ID and secret access key (via an EC2 IAM role or environment variables)
- ec2.py and ec2.ini inventory files are downloaded and configured
- The ANSIBLE_HOSTS environment variable is set
- ansible.cfg exists
- An SSH agent is running (you can check with “ssh-add -L”)

Now we’re ready to see Ansible shine. You can run a command or playbook against any collection of instances based on common Amazon EC2 instance variables. These variables are listed here: http://docs.pythonboto.org/en/latest/ref/ec2.html#module-boto.ec2.instance

If you call the Amazon EC2 inventory script directly, you’ll see your Amazon EC2 inventory broken down and grouped by a variety of factors. To try this, run:

$ /etc/ansible/ec2.py --list
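
The output is one large JSON document, so it can help to pretty-print it and page through the group names (a usage sketch; python -m json.tool just reformats the JSON):

# Pretty-print the dynamic inventory and page through it
$ /etc/ansible/ec2.py --list | python -m json.tool | less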

For a useful example of leveraging the Amazon EC2 instance variables, I currently have two instances with the tag “Ansible Slave” applied to them. Let’s ping them real quick.


$ ansible -m ping tag_Ansible_Slave
10.1.2.137 | success >> {
    "changed": false, 
    "ping": "pong"
}

10.1.2.136 | success >> {
    "changed": false, 
    "ping": "pong"
}

Now, I’ll manually launch another instance from the Amazon EC2 console using the “Launch more like this” option, for simplicity. While this can quickly be done through Ansible, that’s another blog post!

 

Five minutes later, I can run the command again –


$ ansible -m ping tag_Ansible_Slave
The authenticity of host '10.1.2.193 (10.1.2.193)' can't be established.
ECDSA key fingerprint is a7:6c:44:ef:dc:04:68:64:38:be:60:d8:d0:f7:2c:e0.
Are you sure you want to continue connecting (yes/no)? yes
10.1.2.193 | success >> {
    "changed": false, 
    "ping": "pong"
}

10.1.2.136 | success >> {
    "changed": false, 
    "ping": "pong"
}

10.1.2.137 | success >> {
    "changed": false, 
    "ping": "pong"

In this example, I was prompted to accept the new SSH key for my new host. Clearly this won’t scale well, so in /etc/ansible/ansible.cfg you can simply uncomment the line “host_key_checking = False” to avoid needing any manual input when connecting to new hosts.
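
For reference, that setting lives in the [defaults] section of ansible.cfg; a minimal sketch of the relevant lines looks like this (your file will contain many other options):

[defaults]
host_key_checking = False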

I can easily query by another variable. Let’s ping all instances that use my “ansible_slaves” key pair (.pem file):


$ ansible -m ping key_ansible_slaves
10.1.2.193 | success >> {
    "changed": false, 
    "ping": "pong"
}

10.1.2.136 | success >> {
    "changed": false, 
    "ping": "pong"
}

10.1.2.137 | success >> {
    "changed": false, 
    "ping": "pong"
}

Now, suppose we need to update all of my instances tagged Ansible Slave because a critical security patch is needed. We can simply use the Ansible yum module (http://docs.ansible.com/yum_module.html) to upgrade all packages:

$ ansible -s -m yum -a "name=* state=latest" tag_Ansible_Slave

I’ve snipped the output from this for brevity – Ansible can be very verbose when running “yum update” across several instances.

Similarly, I can install Nginx on my Ansible_Slave-tagged instances. Here, I’m providing a URL to the Nginx repository .rpm, since Nginx isn’t in the default RHEL7 repos.


$ ansible -s -m yum -a "name=http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm state=present" tag_Ansible_Slave
10.1.2.193 | success >> {
    "changed": true,
    "msg": "",
    "rc": 0,
    "results": [
        "Loaded plugins: amazon-id, rhui-lb\nExamining /var/tmp/yum-
root-2WRpGo/nginx-release-centos-7-0.el7.ngx.noarch.rpm: nginx-release-
centos-7-0.el7.ngx.noarch\nMarking /var/tmp/yum-root-2WRpGo/nginx-
release-centos-7-0.el7.ngx.noarch.rpm to be installed\nResolving 
Dependencies\n--> Running transaction check\n---> Package nginx-
release-centos.noarch 0:7-0.el7.ngx will be installed\n--> Finished 
Dependency Resolution\n\nDependencies 
Resolved\n\n==============================================================
==================\n
Package Arch Version Repository 
   Size\n==============================================================
==================\nInstalling:\n nginx-release-
centos\n              noarch 7-0.el7.ngx /nginx-release-centos-7-
0.el7.ngx.noarch 1.5 k\n\nTransaction 
Summary\n==============================================================
==================\nInstall 1 Package\n\nTotal size: 1.5 k\nInstalled 
size: 1.5 k\nDownloading packages:\nRunning transaction check\nRunning 
transaction test\nTransaction test succeeded\nRunning 
transaction\n Installing : nginx-release-centos-7-
0.el7.ngx.noarch                     1/1 \n Verifying : nginx-
release-centos-7-0.el7.ngx.noarch                   1/1 
\n\nInstalled:\n nginx-release-centos.noarch 0:7-
0.el7.ngx                                    \n\nComplete!\n"
    ]
}

10.1.2.137 | success >> {
    "changed": true,
    "msg": "",
    "rc": 0,
    "results": [
        "Loaded plugins: amazon-id, rhui-lb\nExamining /var/tmp/yum-
root-nU1W_b/nginx-release-centos-7-0.el7.ngx.noarch.rpm: nginx-release-
centos-7-0.el7.ngx.noarch\nMarking /var/tmp/yum-root-nU1W_b/nginx-
release-centos-7-0.el7.ngx.noarch.rpm to be installed\nResolving 
Dependencies\n--> Running transaction check\n---> Package nginx-
release-centos.noarch 0:7-0.el7.ngx will be installed\n--> Finished 
Dependency Resolution\n\nDependencies 
Resolved\n\n==============================================================
==================\n 
Package Arch Version Repository 
   Size\n==============================================================
==================\nInstalling:\n nginx-release-
centos\n              noarch 7-0.el7.ngx /nginx-release-centos-7-
0.el7.ngx.noarch 1.5 k\n\nTransaction 
Summary\n==============================================================
==================\nInstall 1 Package\n\nTotal size: 1.5 k\nInstalled 
size: 1.5 k\nDownloading packages:\nRunning transaction check\nRunning 
transaction test\nTransaction test succeeded\nRunning 
transaction\n Installing : nginx-release-centos-7-
0.el7.ngx.noarch                     1/1 \n Verifying : nginx-
release-centos-7-0.el7.ngx.noarch                   1/1 
\n\nInstalled:\n nginx-release-centos.noarch 0:7-
0.el7.ngx                                    \n\nComplete!\n"
    ]
}

10.1.2.136 | success >> {
    "changed": true,
    "msg": "",
    "rc": 0,
    "results": [
        "Loaded plugins: amazon-id, rhui-lb\nExamining /var/tmp/yum-
root-PszTju/nginx-release-centos-7-0.el7.ngx.noarch.rpm: nginx-release-
centos-7-0.el7.ngx.noarch\nMarking /var/tmp/yum-root-PszTju/nginx-
release-centos-7-0.el7.ngx.noarch.rpm to be installed\nResolving 
Dependencies\n--> Running transaction check\n---> Package nginx-
release-centos.noarch 0:7-0.el7.ngx will be installed\n--> Finished 
Dependency Resolution\n\nDependencies 
Resolved\n\n==============================================================
==================\n 
Package Arch Version Repository 
   Size\n==============================================================
==================\nInstalling:\n nginx-release-
centos\n              noarch 7-0.el7.ngx /nginx-release-centos-7-
0.el7.ngx.noarch 1.5 k\n\nTransaction 
Summary\n==============================================================
==================\nInstall 1 Package\n\nTotal size: 1.5 k\nInstalled 
size: 1.5 k\nDownloading packages:\nRunning transaction check\nRunning 
transaction test\nTransaction test succeeded\nRunning 
transaction\n Installing : nginx-release-centos-7-
0.el7.ngx.noarch                     1/1 \n Verifying : nginx-
release-centos-7-0.el7.ngx.noarch                    1/1 
\n\nInstalled:\n nginx-release-centos.noarch 0:7-
0.el7.ngx                                    \n\nComplete!\n"
    ]
}

And we’re done!

I’d be remiss to leave out Ansible’s playbook capabilities in this blog post. Playbooks allow you to create repeatable routines of tasks in a single .yml file. Above, we updated all of the packages on my instances and installed the Nginx repository with ad-hoc commands; here we’ll condense those steps into a single command by running a playbook against my instances. Let’s take a look at a simple playbook that does both of those things:


---
- hosts: tag_Ansible_Slave
  user: ec2-user
  sudo: yes
  tasks:
    - name: Update all packages to latest
      yum: name=* state=latest

    - name: Install specific nginx package for centos 7
      yum: name='http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm' state=present

We simply add the code above into a “playbook.yml” file on the Ansible master and run it with the “ansible-playbook playbook.yml” command. See here for more info on playbooks: http://docs.ansible.com/playbooks_intro.html


$ ansible-playbook playbook.yml

PLAY [tag_Ansible_Slave] ******************************************************

GATHERING FACTS ***************************************************************
ok: [10.1.2.193]
ok: [10.1.2.137]
ok: [10.1.2.136]

TASK: [Update all packages to latest] *****************************************
changed: [10.1.2.136]
changed: [10.1.2.193]
changed: [10.1.2.137]

TASK: [Install specific nginx package for centos 7] ***************************
ok: [10.1.2.193]
ok: [10.1.2.136]
ok: [10.1.2.137]

PLAY RECAP ********************************************************************
10.1.2.136 : ok=3 changed=1 unreachable=0 failed=0
10.1.2.137 : ok=3 changed=1 unreachable=0 failed=0
10.1.2.193 : ok=3 changed=1 unreachable=0 failed=0

As you can see, there are a lot of possibilities with Ansible and Amazon EC2. Group instances by common attributes, tags, security groups, and so on, and you’ll be able to administer all of your machines with just a few commands.
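
For instance, ec2.py also builds groups from attributes like key pairs and security groups, so you can target those groups the same way we targeted tags earlier. The group name below is hypothetical and depends on how your own security groups are named:

# Ping every instance in a (hypothetical) security group named "webservers"
$ ansible -m ping security_group_webservers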

To get started yourself, take a look at the Ansible installation documentation (http://docs.ansible.com/intro_installation.html) and grab the Amazon EC2 inventory python script and ini config file. The steps in this post can get you to a functional baseline, at which point you’ll be able to move toward becoming an Amazon EC2-orchestrating and playbook-writing Ansible wizard.

To learn more about Ansible on AWS, click here.