AWS Official Blog

  • In the Works – AWS Region in India

    by Jeff Barr | in Announcements

    We seem to have AWS customers in just about every corner of the world (190 countries at last count). We have offices in many countries, localized content on the AWS web site, and a broad array of certifications and accreditations to give those customers the assurances that they can run many different types of workloads on the AWS Cloud.

    Coming to India
    As part of our AWS Enterprise Summits in India, we have announced our intent to open an AWS Infrastructure Region there in 2016. With tens of thousands of customers in India already making great use of AWS to drive cost savings, accelerate innovation, speed time to market and expand their geographic reach, I am confident that this new region will become a great home for startups, small-to-medium businesses, enterprises, and the public sector.

    AWS customers (and their users) in the region are already taking advantage of our Amazon Route 53 and Amazon CloudFront edge locations in Chennai and Mumbai. Other edge locations that serve the general area in Asia are located in Hong Kong, China; Manila, the Philippines; Australia (Melbourne and Sydney); Japan (Tokyo and Osaka); Korea; Singapore; and Taiwan.

    AWS Customers in India
    While I’ve got your ear, I’d like to tell you about several AWS customers in India.

    Tata Motors Limited is a leading Indian multinational automotive manufacturing company headquartered in Mumbai, and is part of the Tata Group. The company’s customer portals and its telematics systems, which let fleet owners monitor all the vehicles in their fleets in real time, run on AWS. Tata Motors recently built a parts planning system to forecast spares demand by using ordering and inventory patterns. They use AWS for development landscapes immediately after a project kicks off, which shaves four to six weeks of setup time off a typical project cycle. Jagdish Belwal (Chief Information Officer of Tata Motors) told us:

    Whenever we plan on rolling out a new project or experimenting with a new technology, AWS helps us in quickly provisioning the required infrastructure and enables us in getting up and running at a fast pace. AWS has helped us become more agile and has drastically increased our speed of experimentation and therefore, innovation.

    To learn more, watch the Tata Motors AWS Case Study.


    NDTV is India’s leading media house, with TV channels watched by millions of people across the world. They have been using AWS since 2009 to run their video platform and to host all of their web properties. During the May 2014 general election, AWS helped NDTV handle an unprecedented level of web traffic, scaling 26x from 500 million hits on a normal day to 13 billion hits on election day, with regular peaks of 400,000 hits per second. Kawaljit Singh (CTO of NDTV Convergence) told us:

    We have been an early adopter of AWS, and the benefits that we experience go beyond just cost savings; it is the agility that enables us to move fast with new projects that makes a positive impact and a real difference to our business. We are very impressed with the staff and tech support teams of AWS, who have been most helpful in providing support and guidance throughout our cloud journey. They worked hand-in-hand with our team so that we were able to handle the massive scale and unpredictability of workloads for the general election event last year, and as a result, the entire process took place without any hitch at all.

    To learn more about their record-setting traffic on election day and their cloud journey, read CTO Perspectives I: Building a Media Empire from Scratch and CTO Perspectives II: Handling 13 Billion Hits a Day.


    Ferns N Petals is a leading flower retailer in India with 194 outlets in 74 cities and delivery across 156 countries worldwide. Before using AWS, Ferns N Petals ran its IT infrastructure in a traditional data center. They turned to AWS in 2014, as their business grew rapidly, and decided to move their entire online business to the AWS Cloud. Since moving to AWS, they have been able to manage traffic that grows by 80 percent during the festive seasons. Manish Saini (Vice President of Online Business) had the following to say:

    Our experience with AWS over the past year has been excellent. AWS is now the cornerstone of our growth strategy. We have recently launched two new businesses, including new overseas expansion, all running on AWS. We are now able to spend more time and resources in areas that matter to our customers, such as new mobile app development that will enhance their buying experience.

    To learn more about how they use AWS, read Blossoming in the Cloud.


    Novi Digital is a wholly owned subsidiary of STAR India, one of the largest media and entertainment companies in India. The company uses AWS to run hotstar, a flagship OTT platform for drama, movies and live sporting events. With more than 20 million downloads in four months, hotstar has seen one of the fastest adoptions of any new digital service anywhere in the world. In fact, during one of the Cricket World Cup matches, hotstar and starsports.com combined reached a record total of over 2.3 million concurrent streams and more than 50 million video views. Ajit Mohan (Head of Digital, STAR India) had the following to say:

    The reliability of the highly scalable AWS cloud platform has enabled hotstar to break many records in the last four months. AWS has been a key partner in helping us deliver a compelling and seamless experience for millions of users.

    You can read StarSports.com: YouTube of Sports in India, to learn more.


    Stay Tuned
    I’ll have more information on the new region as we get closer to launch time.

    If you are already an AWS developer, you probably know how to take advantage of new regions. If you are not, why not sign up now and take advantage of the AWS Free Tier?

    Jeff;

  • New – AWS Budgets and Forecasts

    by Jeff Barr | in Cost Explorer

    The dynamic, pay-as-you-go nature of the AWS Cloud gives you the opportunity to build systems that respond gracefully to changes in load while paying only for the compute, storage, network, database, and other resources that you actually consume.

    Over the last couple of years, as our customer base has become increasingly sophisticated and cloud-aware, we have been working to provide equally sophisticated tools for viewing and managing costs. Many enterprises use AWS for multiple projects, often spread across multiple departments and billed directly or through linked accounts.

    In the usual budget-centric environment found in an enterprise, no one likes a surprise (except if it is an AWS price reduction). Our goal is to give you a broad array of cost management tools that will provide you with the information that you need to have in order to know what you are currently spending and how much you can expect to spend in the future. We also want to make sure that you have an early warning if costs exceed your expectations for some reason.

    We launched the Cost Explorer last year. This tool integrates with the AWS Billing Console and gives you reporting, analytics, and visualization tools to help you to track and manage your AWS costs.

    New Budgets and Forecasts
    Today we are adding support for budgets and forecasts. You can now define and track budgets for your AWS costs, forecast your AWS costs for up to three months out, and choose to receive email notification when actual costs exceed or are forecast to exceed budget costs.

    Budgeting and forecasting takes place on a fine-grained basis, with filtering or customization based on Availability Zone, Linked Account, API operation, Purchase Option (e.g. Reserved), Service, and Tag.

    The operations provided by these new tools replace the tedious and time-consuming manual calculations that many of our customers (both large and small) have been performing as part of their cost management and budgeting process. After running a private beta with over a dozen large-scale AWS customers, we are confident that these tools will help you to do an even better job of understanding and managing your costs.

    Let’s take a closer look at these new features!

    New Budgets
    You can now set monthly budgets around AWS costs, customized by multiple dimensions including tags. For example, you could create budgets to track EC2, RDS, and S3 costs separately for each active development effort.

    The AWS Management Console will list each of your budgets (you can also filter by name):

    Here’s how you create a new budget. As you can see, you can choose to include costs related to any desired list of AWS services:

    You can set alarms that will trigger based on actual or forecast costs, with email notification to a designated individual or group. These alarms make use of Amazon CloudWatch but are somewhat more abstract in order to better meet the needs of your business and accounting folks. You can create multiple alarms for each budget. Perhaps you want one alarm to trigger when actual costs exceed 80% of budget costs and another when forecast costs exceed budgeted costs.
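    For readers who prefer code to the console, budgets and their alarms can also be described programmatically. The sketch below builds the request parameters for a monthly cost budget with an 80%-of-actual notification; the names, amounts, account ID, and email address are placeholders, and the commented-out call assumes the boto3 `budgets` client (part of today's AWS SDK for Python).

```python
import json

def build_budget_request(name, limit_usd, email, threshold_pct=80.0):
    """Build parameters for a monthly cost budget with an
    actual-spend notification at threshold_pct of the budgeted amount."""
    return {
        "Budget": {
            "BudgetName": name,
            "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": threshold_pct,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
        }],
    }

params = build_budget_request("dev-team-ec2", 1000, "finance@example.com")
print(json.dumps(params["Budget"]["BudgetLimit"]))

# With credentials configured, the request could be submitted like this:
# import boto3
# boto3.client("budgets").create_budget(AccountId="123456789012", **params)
```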

    You can also view variances (budgeted vs. actual) in the console. Here’s an example:

    New Forecasts
    Many AWS teams use an internal algorithm to predict demand for their offerings. They use the results to help them to allocate development and operational resources, plan and execute marketing campaigns, and more. Our new budget forecasting tool makes use of the same algorithm to present you with cost estimates that include both 80% and 95% confidence interval ranges.

    As is the case with budgets, you can filter forecasts on a wide variety of dimensions. You can create multiple forecasts and you can view them in the context of historical costs.

    After you create a forecast, you can view it as a line chart or as a bar chart:

    As you can see from the screen shots, the forecast, budget, and confidence intervals are all clearly visible:

    These new features are available now and you can start using them today!

    Jeff;

  • AWS Week in Review – June 22, 2015

    by Jeff Barr | in Week in Review

    Let’s take a quick look at what happened in AWS-land last week:

    Monday, June 22
    Tuesday, June 23
    Wednesday, June 24
    Thursday, June 25
    Friday, June 26

    New & Notable Open Source Packages

      • Dyn53 is a Dynamic DNS Client for Route 53 for use in environments with dynamic external IP address assignment, such as with home ADSL connections.
      • Aloisius helps you to manage the life-cycle of AWS CloudFormation Stacks.
      • Legacy is a utility for uploading Cassandra snapshots and incremental backups to S3.
      • Lambada is a more passionate way to create AWS Lambda functions using Clojure.

    Upcoming Events

    Upcoming Events at the AWS Loft (San Francisco)

    Upcoming Events at the AWS Loft (New York)

        • June 29 – Chartbeat (6:30 PM).
        • June 30 – Picking the Right Tool for the Job (HTML5 vs. Unity) (Noon – 1 PM).
        • June 30 – So You Want to Build a Mobile Game? (1 PM – 4:30 PM).
        • June 30 – Buzzfeed (6:30 PM).
        • July 6 – AWS Bootcamp (10 AM – 6 PM).
        • July 7 – Dr. Werner Vogels (Amazon CTO) + Startup Founders (6:30 PM).
        • July 7 – AWS Bootcamp (10 AM – 6 PM).
        • July 8 – Sumo Logic Panel and Networking Event (6:30 PM).
        • July 9 – AWS Activate Social Event (7:00 PM – 10 PM).
        • July 10 – Getting Started with Amazon EMR (Noon – 1 PM).
        • July 10 – Amazon EMR Deep Dive (1 PM – 2 PM).
        • July 10 – How to Build ETL Workflows Using AWS Data Pipeline and EMR (2 – 3 PM).
        • July 14 – Chef Bootcamp (10 AM – 6 PM).
        • July 15 – Chef Bootcamp (10 AM – 6 PM).
        • July 16 – Science Logic (11 AM – Noon).
        • July 16 – Intel Lustre (4 PM – 5 PM).
        • July 17 – Chef Bootcamp (10 AM – 6 PM).
        • July 22 – Mashery (11 AM – 3 PM).
        • July 23 – An Evening with Chef (6:30 PM).
        • July 29 – Evident.io (6:30 PM).
        • August 5 – Startup Pitch Event and Summer Social (6:30 PM).
        • August 25 – Eliot Horowitz, CTO and Co-Founder of MongoDB (6:30 PM).
        • AWS Summits.

    Help Wanted

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    Jeff;

  • EC2’s R3 Instances Now Available in Brazil

    by Jeff Barr | in Amazon EC2

    EC2’s R3 instances are designed to provide you with the best price per GiB of RAM, along with high memory performance. I am happy to be able to announce that they are now available in the South America (Brazil) region, in two sizes.

    Here are the specs:

    Instance Name | vCPU Count | RAM     | SSD Storage | Hourly On-Demand (Linux) | RI Upfront (Linux, 3 Year) | RI Price / Hour (Linux, 3 Year)
    r3.4xlarge    | 16         | 122 GiB | 1 x 320 GB  | $2.946                   | $17,345                    | $0.660
    r3.8xlarge    | 32         | 244 GiB | 2 x 320 GB  | $5.892                   | $34,690                    | $1.320
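    A quick way to compare the Reserved Instance pricing against On-Demand is to amortize the upfront payment over the full three-year term. Here is a short sketch using the numbers from the table above:

```python
HOURS_3YR = 3 * 365 * 24  # 26,280 hours in the three-year RI term

def effective_hourly(upfront, hourly):
    """Amortize the RI upfront payment across the full term."""
    return upfront / HOURS_3YR + hourly

def savings_vs_on_demand(upfront, hourly, on_demand):
    """Fraction saved relative to running On-Demand around the clock."""
    return 1 - effective_hourly(upfront, hourly) / on_demand

# Numbers from the table above (South America region, Linux):
for name, od, upfront, hourly in [
    ("r3.4xlarge", 2.946, 17345, 0.660),
    ("r3.8xlarge", 5.892, 34690, 1.320),
]:
    print(f"{name}: ${effective_hourly(upfront, hourly):.3f}/hr effective, "
          f"{savings_vs_on_demand(upfront, hourly, od):.0%} off On-Demand")
```

    For both sizes, the three-year RI works out to roughly half the On-Demand rate when the instance runs continuously.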

    Here are some of the other notable features and characteristics of these instances:

    • Intel Xeon (Ivy Bridge) processors.
    • Support for Enhanced Networking for lower latency, low jitter, and high packet per second performance.
    • Sustained memory bandwidth of up to 63 GBps.
    • Fast I/O performance – up to 150,000 4 KB random reads per second.

    You can use these instances for in-memory analytics (SAP HANA springs to mind), high performance relational and NoSQL databases, data warehouses, and memory-resident caches.

    The r3.4xlarge instances can also be launched in EBS-Optimized form. Both instances support Hardware Virtualization (HVM) AMIs only; see the R3 Technical Documentation for more information.

    The instances are available today in On-Demand, Reserved, and Spot form.

    — Jeff;

  • AWS Public Sector Update – City on a Cloud and More

    by Jeff Barr | in AWS GovCloud (US), Public Data Sets

    Earlier today we opened the 6th annual AWS Government, Education, and Nonprofits Symposium in Washington, DC. As part of the event we announced another City on a Cloud Challenge, an upcoming AWS Public Data Set, and some information about the overall usage and growth of AWS in this space.

    City on a Cloud Challenge
    We are now looking for entries for the second City on a Cloud Challenge! With awards totaling $250,000 in AWS credits, this program is designed to recognize local and regional governments (along with developers) that are pushing forward with the cloud in innovative ways.

    Entries must use (or propose the use of) AWS. Prizes will be awarded to eight grand prize winners in three categories (Best Practices, Partners in Innovation, and Dream Big). Entries must be received by August 21, 2015 so that we can choose the finalists in September and announce the winners at AWS re:Invent.

    Winners of the 2014 City on a Cloud Challenge included:

    • Sustainable Streets (New York City DOT)
    • Disaster Recovery (City of Asheville, North Carolina)
    • Smart Airport Experience (London City Airport)
    • City mapping (City and County of San Francisco)
    • Crime and risk mapping (Hunchlab)
    • N_Sight IQ (Neptune Technology Group)
    • ePropertyPlus inventory management
    • DKAN open data platform (Nucivic)

    New AWS Public Data Set – NEXRAD (Coming Soon)
    The Next Generation Weather Radar (NEXRAD) is a network of 160 high-resolution Doppler radar sites throughout the United States and select overseas locations whose data is managed by the National Oceanic and Atmospheric Administration (NOAA). NEXRAD detects precipitation and atmospheric movement and disseminates data in 5 minute intervals from each site. As part of the NOAA Big Data Project, AWS will be making NEXRAD data freely available on Amazon S3. I’ll share more information (via a blog post or Twitter) as soon as I get it.

    AWS Usage and Growth
    Our customers are using AWS to run their classrooms, schools, departments, agencies, and research projects. Here are some of the numbers that we announced at the symposium:

    • 4,500 educational institutions use AWS.
    • 1,700 government agencies use AWS.
    • 17,000 non-profit organizations use AWS.

    AWS GovCloud (US) is an isolated AWS region used by US government agencies and customers to host sensitive workloads in the cloud. On a year over year basis, the number of customers for this region has grown by 273%.

    Jeff;

  • New – Alexa Skills Kit, Alexa Voice Service, Alexa Fund

    by Jeff Barr | in Alexa, AWS Lambda

    Amazon Echo is a new type of device designed around your voice. Echo connects to Alexa, a cloud-based voice service powered (of course) by AWS. You can ask Alexa to provide information, answer questions, play music, read the news, and get results or answers instantly.

    When you are in the same room as an Amazon Echo, you simply say the wake word (either “Alexa” or “Amazon”) and then make your request. For example, you might say “Alexa, when do the Seattle Mariners play next?” or “Alexa, will it ever rain in Seattle?” Behind the scenes, code running in the cloud hears, understands, and processes your spoken requests.

    Today we are giving you the ability to create new voice-driven capabilities (also known as skills) for Alexa using the new Alexa Skills Kit (ASK). You can connect existing services to Alexa in minutes with just a few lines of code. You can also build entirely new voice-powered experiences in a matter of hours, even if you know nothing about speech recognition or natural language processing.

    We will also be opening up the underlying Alexa Voice Service (AVS) to developers in preview form. Hardware manufacturers and other participants in the new and exciting Internet of Things (IoT) world can sign up today for notification when the preview is available. Any device that has a speaker, a microphone, and an Internet connection can integrate Alexa with a few lines of code.

    In order to help to inspire creativity and to fuel innovation in and around voice technology, we are also announcing the Alexa Fund. The Alexa Fund will provide up to $100 million in investments to support developers, manufacturers, and start-ups of all sizes who are creating new experiences designed around the human voice to improve customers’ lives.

    ASK and AWS Lambda
    You can build new skills for Alexa using AWS Lambda. You simply write the code using Node.js and upload it to Lambda through the AWS Management Console, where it becomes known as a Lambda function. After you upload and test your function using the sample events built into the Console, you can sign in to the Alexa Developer Portal, register your code in the portal (by creating an Alexa App), and then use the ARN (Amazon Resource Name) of the function to connect it to the App. After you complete your testing, you can publish your App in order to make it available to Echo owners. Lambda will take care of hosting and running your code in a scalable, fault-tolerant environment. In many cases, the function that supports an Alexa skill will remain comfortably within the Lambda Free Tier. Read Developing Your Alexa Skill as a Lambda Function to get started.
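    The skill functions described above are written in Node.js; purely as an illustration, here is the general shape of such a handler sketched in Python. The intent name and reply text are invented, and the response fields follow the ASK response document format (version, outputSpeech, shouldEndSession).

```python
def handler(event, context):
    """Hypothetical Lambda handler for a one-shot Alexa skill.
    Alexa delivers the request as `event`; the skill replies with a
    JSON document in the ASK response format."""
    intent = (event.get("request", {})
                   .get("intent", {})
                   .get("name", "UnknownIntent"))
    if intent == "NextGameIntent":  # illustrative intent name
        text = "The Mariners play tonight at seven."
    else:
        text = "Sorry, I don't know that one."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

# A trimmed-down IntentRequest, roughly as Alexa might deliver it:
sample_event = {"request": {"type": "IntentRequest",
                            "intent": {"name": "NextGameIntent"}}}
print(handler(sample_event, None)["response"]["outputSpeech"]["text"])
```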

    ASK as a Web Service
    You can also build your app as a web service and take on more of the hosting duties yourself using Amazon Elastic Compute Cloud (EC2), AWS Elastic Beanstalk, or an on-premises server fleet. If you choose any of these options, the service must be Internet-accessible and it must adhere to the Alexa app interface specification. It must support HTTPS over SSL/TLS on port 443 and it must provide a certificate that matches the domain name of the service endpoint. Your code is responsible for verifying that the request actually came from Alexa and for checking the time-based message signature. To learn more about this option, read Developing Your Alexa App as a Web Service.
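    The time-based check mentioned above can be sketched as follows. The 150-second tolerance here is an assumption chosen to illustrate the idea; consult the Alexa app interface specification for the actual requirement, and note that the full check also involves verifying the request signature.

```python
from datetime import datetime, timedelta, timezone

TOLERANCE = timedelta(seconds=150)  # assumed value; check the ASK spec

def timestamp_is_fresh(request_timestamp, now=None):
    """Reject replayed requests: the signed timestamp must be within
    TOLERANCE of the server's current time (in either direction)."""
    sent = datetime.strptime(request_timestamp, "%Y-%m-%dT%H:%M:%SZ")
    sent = sent.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return abs(now - sent) <= TOLERANCE

now = datetime(2015, 6, 25, 12, 0, 0, tzinfo=timezone.utc)
print(timestamp_is_fresh("2015-06-25T11:59:00Z", now))  # True: 60 s old
print(timestamp_is_fresh("2015-06-25T11:00:00Z", now))  # False: an hour old
```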

    Learn More
    We are publishing a lot of information about ASK, AVS, and the Alexa Fund today. Here are some good links to get you started:

    Jeff;

  • Focusing on Spot Instances – Let’s Talk About Best Practices

    by Jeff Barr | in Amazon EC2, EC2 Spot Instances

    I often point to EC2 Spot Instances as a feature that can only be implemented at world-scale with any degree of utility.

    Unless you have a massive amount of compute power and a multitude of customers spread across every time zone in the world, with a wide variety of workloads, you simply won’t have the ever-changing shifts in supply and demand (and the resulting price changes) that are needed to create a genuine market. As a quick reminder, Spot Instances allow you to save up to 90% (when compared to On-Demand pricing) by placing bids for EC2 capacity. Instances will run whenever your bid exceeds the current Spot Price and can be terminated (with a two minute warning) in the presence of higher bids for the same capacity (as determined by region, availability zone, and instance type).

    Because Spot Instances come and go, you need to pay attention to your bidding strategy and to your persistence model in order to maximize the value that you derive from them. Looked at another way, by structuring your application in the right way you can be in a position to save up to 90% (or, if you have a flat budget, you can get 10x as much computing done). This is a really interesting spot for you, as the cloud architect for your organization. You can exercise your technical skills to drive the cost of compute power toward zero, while making applications that are price aware and more fault-tolerant. Master the ins and outs of Spot Instances and you (and your organization) will win!

    The Trend is Clear
    As I look back at the history of EC2 — from launching individual instances on demand, then on to Spot Instances, Containers, and Spot Fleets — the trend is pretty clear. Where you once had to pay attention to individual, long-running instances and to list prices, you can now think about collections of instances with an indeterminate lifetime, running at the best possible price, as determined by supply and demand within individual capacity pools (groups of instances that share the same attributes). This new way of thinking can liberate you from some older thought patterns and can open the door to some new and intriguing ways to obtain massive amounts of compute capacity quickly and cheaply, so you can build really cool applications at a price you can afford.

    I should point out that there’s a win-win situation when it comes to Spot. You (and your customers) win by getting compute power at the most economical price possible at a given point in time. Amazon wins because our fleet of servers (see the AWS Global Infrastructure page for a list of locations) is kept busy doing productive work. High utilization improves our cost structure, and also has an environmental benefit.

    Spot Best Practices
    Over the next few months, with a lot of help from the EC2 Spot Team, I am planning to share some best practices for the use of Spot Instances. Many of these practices will be backed up with real-world examples that our customers have shared with us; these are not theoretical or academic exercises. Today I would like to kick off the series by briefly outlining some best practices.

    Let’s define the concept of a capacity pool in a bit more detail. As I alluded to above, a capacity pool is a set of available EC2 instances that share the same region, availability zone, operating system (Linux/Unix or Windows), and instance type. Each EC2 capacity pool has its own availability (the number of instances that can be launched at any particular moment in time) and its own price, as determined by supply and demand. As you will see, applications that can run across more than one capacity pool are in the best position to consistently access the most economical compute power. Note that capacity in a pool is shared between On-Demand and Spot instances, so Spot prices can rise from either more demand for Spot instances or an increase in requests for On-Demand instances.

    Here are some best practices to get you started.

    Build Price-Aware Applications – I’ve said it before: cloud computing is a combination of a business model and a technology. You can write code (and design systems) that are price-aware, and that have the potential to make your organization’s cloud budget go a lot further. This is a new area for a lot of technologists; my advice to you is to stretch your job description (and your internal model of who you are and what your job entails) to include designing for cost savings.

    You can start by spending some time investigating (or by building some tools using the EC2 API or the AWS Command Line Interface (CLI)) the full range of capacity pools that are available to you within the region(s) that you use to run your app. High prices and a high degree of price variance over time indicate that many of your competitors are bidding for capacity in the same pool. Seek out pools that have lower prices and more stable prices (both current and historic) to find bargains and lower interruption rates.

    Check the Price History – You can access historical prices on a per-pool basis going back 90 days (3 months). Instances that are currently very popular with our customers (the R3s as I write this) tend to have Spot prices that are somewhat more volatile. Older generations (including c1.8xlarge, m1.small, cr1.8xlarge, and cc2.8xlarge) tend to be much more stable. In general, picking older generations of instances will result in lower net prices and fewer interruptions.
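    As a sketch of the kind of tooling this best practice suggests, the snippet below ranks capacity pools by a simple cheap-and-stable score (mean price plus one standard deviation). The pool names and prices are made up for illustration, and the commented-out boto3 call shows how the real 90-day observations could be fetched.

```python
from statistics import mean, pstdev

def rank_pools(price_history):
    """Given {pool_name: [observed spot prices]}, rank pools by a
    simple score favoring cheap AND stable pricing."""
    scores = {pool: mean(p) + pstdev(p) for pool, p in price_history.items()}
    return sorted(scores, key=scores.get)

history = {  # made-up observations for illustration
    "us-east-1a/m1.small":   [0.007, 0.007, 0.008, 0.007],
    "us-east-1b/r3.4xlarge": [0.35, 1.20, 0.33, 2.90],
}
print(rank_pools(history)[0])  # the steadier, cheaper pool ranks first

# The raw observations come from the EC2 API, e.g.:
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# page = ec2.describe_spot_price_history(
#     InstanceTypes=["m1.small"], ProductDescriptions=["Linux/UNIX"])
# for rec in page["SpotPriceHistory"]:
#     print(rec["AvailabilityZone"], rec["SpotPrice"], rec["Timestamp"])
```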

    Use Multiple Capacity Pools – Many types of applications can run (or can be easily adapted to run) across multiple capacity pools. By having the ability to run across multiple pools, you reduce your application’s sensitivity to price spikes that affect a pool or two (in general, there is very little correlation between prices in different capacity pools). For example, if you run in five different pools your price swings and interruptions can be cut by 80%.

    A high-quality approach to this best practice can result in multiple dimensions of flexibility, and access to many capacity pools. You can run across multiple availability zones (fairly easy in conjunction with Auto Scaling and the Spot Fleet API) or you can run across different sizes of instances within the same family (Amazon EMR takes this approach). For example, your app might figure out how many vCPUs it is running on, and then launch enough worker threads to keep all of them occupied.

    Adherence to this best practice also implies that you should strive to use roughly equal amounts of capacity in each pool; this will tend to minimize the impact of changes to Spot capacity and Spot prices.
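    One way to put the multiple-pool idea into practice is a Spot Fleet request with one launch specification per capacity pool. The sketch below builds such a request configuration; the role ARN, AMI ID, bid, and pool list are placeholders, and the actual submission (commented out) uses the EC2 `request_spot_fleet` call.

```python
def fleet_config(role_arn, ami, bid, capacity, pools):
    """Build a Spot Fleet request config that spreads capacity across
    several pools (one launch specification per AZ/instance-type pair)."""
    return {
        "IamFleetRole": role_arn,
        "SpotPrice": str(bid),
        "TargetCapacity": capacity,
        "LaunchSpecifications": [
            {"ImageId": ami, "InstanceType": itype,
             "Placement": {"AvailabilityZone": az}}
            for az, itype in pools
        ],
    }

config = fleet_config(
    "arn:aws:iam::123456789012:role/fleet-role",  # placeholder ARN
    "ami-12345678", bid=0.25, capacity=20,        # placeholder AMI and bid
    pools=[("us-east-1a", "r3.4xlarge"),
           ("us-east-1b", "r3.4xlarge"),
           ("us-east-1a", "r3.8xlarge")])
print(len(config["LaunchSpecifications"]))  # one spec per pool

# import boto3
# boto3.client("ec2").request_spot_fleet(SpotFleetRequestConfig=config)
```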

    To learn more, read about Spot Instances in the EC2 Documentation.

    Stay Tuned
    As I mentioned, this is an introductory post and we have a lot more ideas and code in store for you!  If you have feedback, or if you would like to contribute your own Spot tips to this series, please send me (awseditor@amazon.com) a note.

    Jeff;

  • New AWS Quick Starts – Trend Micro Deep Security and Microsoft Lync Server

    by Jeff Barr | in Quick Start

    We have prepared a pair of new AWS Quick Start Reference Deployments for you! As is the case with all AWS Quick Starts, they help you to deploy fully functional enterprise software on the AWS Cloud in no time flat!

    Each of the reference deployments includes an AWS CloudFormation template that follows AWS best practices for security and availability. These templates can be used as-is, customized, or used as the basis for solutions that are even more elaborate.

    Trend Micro Deep Security
    Trend Micro Deep Security is a host-based security product that provides intrusion detection and prevention, anti-malware, host firewall, file and system integrity monitoring, and log inspection modules in a single agent running in the guest operating system.

    The Quick Start (Trend Micro Deep Security on the AWS Cloud) deploys Trend Micro Deep Security version 9.5 into an Amazon VPC using AMIs from the AWS Marketplace. It includes a pair of templates. The first one provides an end-to-end deployment into a new VPC; the second one works within an existing VPC.

    Microsoft Lync Server

    Lync Server 2013 is a communications software platform that offers instant messaging (IM), presence, conferencing, and telephony solutions for small, medium, and large businesses.

    The Quick Start (Microsoft Lync Server 2013 on the AWS Cloud) implements a small or medium-sized Lync Server environment. This environment includes a pair of Lync Server 2013 Standard Edition pools across two Availability Zones for high availability.

    Jeff;

  • New – Tag Your Amazon Glacier Vaults

    by Jeff Barr | in Amazon Glacier

    Amazon Glacier is a secure, durable, and extremely low-cost storage service for data archiving and online backup (see my post, Amazon Glacier: Archival Storage for One Penny Per GB Per Month for an introduction).

    Since we introduced Glacier in the summer of 2012, we have made it even more useful by adding lifecycle management, data retrieval policies & audit logging, range retrieval, and vault access policies.

    Tag Your Vaults
    If you are already a Glacier user (or if you have read my intro), you know that you create archives and store them in Glacier vaults.

    Today we are making Glacier even more useful by giving you the ability to tag your vaults. You can use these tags for cost allocation purposes (by department, group, or any other desired categorization) or for other forms of tracking.

    Here’s how you tag a vault with a key named “Department”:

    After you have tagged your vaults, you can use the AWS Cost Allocation Reports to view a breakdown of costs and usage by tag.
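    If you manage many vaults, the same tagging can be scripted. The sketch below builds the tag map in plain Python; the commented-out calls assume the boto3 Glacier client (the vault name, department, and project values are placeholders).

```python
def department_tags(department, extra=None):
    """Build the tag map for a vault; Glacier tags are simple
    string-to-string pairs."""
    tags = {"Department": department}
    tags.update(extra or {})
    return tags

tags = department_tags("Finance", {"Project": "Backups"})
print(sorted(tags))

# import boto3
# glacier = boto3.client("glacier")
# # accountId "-" means "the account that owns the credentials"
# glacier.add_tags_to_vault(accountId="-", vaultName="MyVault", Tags=tags)
# print(glacier.list_tags_for_vault(accountId="-", vaultName="MyVault"))
```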

    As part of today’s launch, we updated the design of the Glacier console. We also made some speed improvements and added a filtering mechanism to make it easier for you to locate a particular vault. For example, here are all of my “Backup” vaults:

    This new feature is available now and you can start using it today! To learn more, read about Tagging Your Glacier Vaults.

    Jeff;

  • Now Available – AWS SDK For Python (Boto3)

    by Jeff Barr | in AWS SDK for Python, Developers

    My colleague Peter Moon sent the guest post below to introduce the newest version of the AWS SDK for Python, also known as Boto.

    — Jeff;


     

    Originally started as a Python client for Amazon S3 by Mitch Garnaat in 2006, Boto has been the primary tool for working with Amazon Web Services for many Python developers and system administrators across the world. Since its inception, Boto has been through an exciting journey of evolution driven by countless contributors from the Python community as well as AWS. It now supports almost 40 AWS services and is downloaded hundreds of thousands of times every week, according to PyPI. Thinking of the journey Boto has been through, I am very excited today to announce the next chapter in its history: the general availability of Boto3, the next major version of Boto.

    Libraries must adapt to changes in users’ needs and also to changes in the platforms on which they run. As AWS’s growth accelerated over the years, the speed at which our APIs are updated has also gotten faster. This required us to devise a scalable method to quickly deliver support for multiple API updates every week, and this is why AWS API support in Boto3 is almost completely data-driven. Boto3 has ‘client’ classes that are driven by JSON-formatted API models that describe AWS APIs, so most new service features only require a simple model update. This allows us to deliver support for API changes very quickly, in a consistent and reliable manner.
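    To make the data-driven idea concrete, here is a toy illustration of how a JSON-style model can generate client methods. Nothing below is Boto3's actual model format or machinery; it only sketches why adding an operation to a model requires no new hand-written code.

```python
# Toy model: invented for illustration, not Boto3's real model format.
MODEL = {
    "operations": {
        "ListVaults":  {"http_method": "GET", "path": "/vaults"},
        "CreateVault": {"http_method": "PUT", "path": "/vaults/{name}"},
    }
}

class DataDrivenClient:
    """Generate one method per operation in the model, so a model
    update is all that's needed to support a new operation."""
    def __init__(self, model):
        for op, spec in model["operations"].items():
            setattr(self, self._snake(op), self._make_method(spec))

    @staticmethod
    def _snake(name):
        # "ListVaults" -> "list_vaults"
        return "".join("_" + c.lower() if c.isupper() else c
                       for c in name).lstrip("_")

    @staticmethod
    def _make_method(spec):
        def method(**params):
            # A real client would sign and send an HTTP request here.
            return (spec["http_method"], spec["path"].format(**params))
        return method

client = DataDrivenClient(MODEL)
print(client.list_vaults())                # ('GET', '/vaults')
print(client.create_vault(name="photos"))  # ('PUT', '/vaults/photos')
```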

    Boto comes with many convenient abstractions that hide explicit HTTP API calls and offer intuitive Python classes for working with AWS resources such as Amazon Elastic Compute Cloud (EC2) instances or Amazon Simple Storage Service (S3) buckets. We formalized this concept in Boto3 and named it Resource APIs, which are also data-driven by resource models that build on top of API models. This architecture allows us to deliver convenient object-oriented abstractions in a scalable manner not just for Boto3, but also for other AWS SDKs, by sharing the same models across languages.

    Python 3 had been one of the most frequent feature requests from Boto users until we added support for it in Boto last summer, with much help from the community. While working on Boto3, we have kept Python 3 support in laser focus from the get-go, and each release we publish is fully tested on Python versions 2.6.5+, 2.7, 3.3, and 3.4. So customers using any of these Python versions can have full confidence that Boto3 will work in their environment.

    Lastly, while we encourage all new projects to use Boto3 instead of Boto, and existing projects to migrate to Boto3, we understand that migrating an existing code base to a new major version can be difficult, time-consuming, or sometimes even nearly impossible. To alleviate the pain, Boto3 has a new top-level module name (boto3), so it can be used side-by-side with your existing code that uses Boto. This makes it easy for customers to start using all new features and API support available in Boto3, even if they’re only making incremental updates to an existing project.

    As always, you can find us on GitHub (https://github.com/boto/boto3). We would love to hear any questions or feedback you have in the Issues section of the repository.

    To get started, install Boto3 and read the docs!

    $ pip install boto3
    

    Peter Moon, Senior Product Manager, AWS SDKs and Tools