Amazon Web Services Blog

  • AWS Trusted Advisor For Everyone

    AWS Trusted Advisor is your customized cloud expert! It helps you to observe best practices for the use of AWS by inspecting your AWS environment with an eye toward saving money, improving system performance and reliability, and closing security gaps. Since we launched Trusted Advisor in 2013, our customers have viewed over 1.7 million best-practice recommendations for cost optimization, performance improvement, security, and fault tolerance and have reduced their costs by about 300 million dollars.

    Today I have two big pieces of news for all AWS users. First, we are making a set of four Trusted Advisor best practices available at no charge. Second, we are moving the Trusted Advisor into the AWS Management Console.

Four Best Practices at No Charge
    The following Trusted Advisor checks are now available to all AWS users at no charge:

Service Limits Check - This check inspects your position with regard to the most important service limits for each AWS product. It alerts you when you are using more than 80% of your allocation of resources such as EC2 instances and EBS volumes.

Security Groups - Specific Ports Unrestricted Check - This check looks for, and notifies you of, overly permissive access to your EC2 instances, helping you to avoid malicious activities such as hacking, denial-of-service attacks, and loss of data.

    IAM Use Check - This check alerts you if you are using account-level credentials to control access to your AWS resources instead of following security best practices by creating users, groups, and roles to control access to the resources.

MFA on Root Account Check - This check recommends the use of multi-factor authentication (MFA) to improve security by requiring additional authentication data from a secondary device.

    You can subscribe to the Business or Enterprise level of AWS Support in order to gain access to the remaining 33 checks (with more on the way).
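
If you have a Business or Enterprise support plan, you can also reach Trusted Advisor programmatically by way of the AWS Support API. Here's a rough sketch, using the AWS SDK for Java, of listing every check that is available to an account; the bare client construction and the output format are my own assumptions:

import com.amazonaws.services.support.AWSSupportClient;
import com.amazonaws.services.support.model.DescribeTrustedAdvisorChecksRequest;
import com.amazonaws.services.support.model.TrustedAdvisorCheckDescription;

// List every Trusted Advisor check available to this account
AWSSupportClient support = new AWSSupportClient();

DescribeTrustedAdvisorChecksRequest checksRequest =
    new DescribeTrustedAdvisorChecksRequest().withLanguage("en");

for (TrustedAdvisorCheckDescription check :
     support.describeTrustedAdvisorChecks(checksRequest).getChecks())
{
    System.out.println(check.getCategory() + ": " + check.getName());
}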

    Trusted Advisor in the Console
The Trusted Advisor is now an integral part of the AWS Management Console. We have fine-tuned the user interface to simplify navigation and to make it even easier for you to find and act on recommendations, and to filter out recommendations that you no longer want to see.

    Let's take a tour of the Trusted Advisor, starting from the Dashboard. I can see a top-level summary of all four categories of checks at a glance:

Each category actually contains four distinct links. If I click on the large icon associated with a category, I see a summary of its checks without regard to their severity or status. Clicking on one of the smaller green, orange, or red icons takes me to the items with no problems, the items where investigation is recommended, or the items where action is recommended, respectively. It looks like I have room for some improvements in my fault tolerance:

    I can use the menu at the top to filter the checks (this is equivalent to using the green, orange, and red icons):

If I sign up for the Business or Enterprise level of support, I can also choose to tell Trusted Advisor to selectively exclude certain resources from the checks. In the following case, I am running several Amazon Relational Database Service (RDS) instances without Multi-AZ. They are test databases and high availability isn't essential, so I can exclude them from the check results:

    I can also download the results of each check for further analysis or distribution:

    I can even ask Trusted Advisor to send me a status update each week:

    With the introduction of the console, we are also introducing a new, IAM-based model to control access to the results of each check and the actions associated with them in the console. To learn more about this important new feature, read about Controlling Access to the Trusted Advisor Console.

    Available Now
    As always (I never get tired of saying this), these new features are available now and you can start using them today!

    -- Jeff;

  • Route 53 Update - Domain Name Registration, Geo Routing, and a Price Reduction

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service that includes a powerful Health Checking Service. Today we are extending Route 53 with support for domain name registration and management and for Geo DNS. We are also reducing the price for Route 53 queries! Let's take a closer look at each one of these items.

    Domain Name Registration and Management
    I registered my first domain name in 1995! Back then, just about every aspect of domain management and registration was difficult, expensive, and manual. After you found a good name, you had to convince one or two of your tech-savvy friends to host your DNS records, register the name using an email-based form, and then bring your site online. With the advent of web-based registration and multiple registrars the process became a lot smoother and more economical.

    Up until now, you had to register your domain at an external registrar, create the Hosted Zone in Route 53, and then configure your domain's entry at the registrar to point to the Route 53 name servers. With today's launch of Route 53 Domain Name Registration, you can now take care of the entire process from within the AWS Management Console (API access is also available, of course). You can buy, manage, and transfer (both in and out) domains from a wide selection of generic and country-specific top-level domains (TLDs). As part of the registration process, we'll automatically create and configure a Route 53 Hosted Zone for you. You can think up a good name, register it, and be online with static (Amazon Simple Storage Service (S3)) or dynamic content (Amazon Elastic Compute Cloud (EC2), AWS Elastic Beanstalk, or AWS OpsWorks) in minutes.

If you, like many other AWS customers, own hundreds or thousands of domain names, you know first-hand how much effort goes into watching for pending expirations and renewing your domain names. By transferring your domain to Route 53, you can take advantage of our configurable expiration notification and our optional auto-renewal. You can avoid embarrassing (and potentially expensive) mistakes and you can focus on your application instead of on your domain names. You can even reclaim the brain cells that once stored all of those user names and passwords.

    Let's walk through the process of finding and registering a domain name using the AWS Management Console and the Route 53 API.

    The Route 53 Dashboard gives me a big-picture view of my Hosted Zones, Health Checks, and Domains:

    I begin the registration process by entering the desired name and selecting a TLD from the menu:

The console checks availability within the selected TLD and in some other popular TLDs. I can add the names I want to the cart (.com and .info in this case):


    Then I enter my contact details:

    I can choose to enable privacy protection for my domain. This option will hide most of my personal information from the public Whois database in order to thwart scraping and spamming.

    When everything is ready to go, I simply agree to the terms and my domain(s) will be registered:

    I can see all of my domains in the console:

    I can also see detailed information on a single domain:

    I can also transfer domains into or out of Route 53:

    As I mentioned earlier, I can also investigate, purchase, and manage domains through the Route 53 API. Let's say that you are picking a name for a new arrival to your family and you want to make sure that you can acquire a suitable domain name (in most cases, consultation with your significant other is also advisable). Here's some code to automate the entire process! I used the AWS SDK for PHP.

    The first step is to set the desired last name and gender, and the list of acceptable TLDs:

    $LastName = 'Barr';
    $Gender   = 'F';
    $TLDs     = array('.com', '.org');
    

    Then I include the AWS SDK and the PHP Simple HTML DOM and create the Route 53 client object:

    require 'aws.phar';
    require 'simple_html_dom.php';
    
    // Connect to Route 53
    $Client = \Aws\Route53Domains\Route53DomainsClient::factory(array('region' => 'us-east-1'));
    

    Now I need an array of the most popular baby names. I took this list and parsed the HTML to create a PHP array:

    $HTML       = file_get_html("http://www.babycenter.com/top-baby-names-2013");
    $FirstNames = array();
    
    $Lists = $HTML->find('table tr ol');
    $Items = $Lists[($Gender == 'F') ? 0 : 1];
    
    foreach ($Items->find('li') as $Item)
    {
      $FirstNames[] = $Item->find('a', 0)->innertext;
    }
    

    With the desired last name and the list of popular first names in hand (or in memory to be precise), I can generate interesting combinations and call the Route 53 checkDomainAvailability function to see if they are available:

foreach ($FirstNames as $FirstName)
{
  foreach ($TLDs as $TLD)
  {
    $DomainName = $FirstName . '-' . $LastName . $TLD;

    $Result = $Client->checkDomainAvailability(array(
      'DomainName'  => $DomainName,
      'IdnLangCode' => 'eng'));

    echo "{$DomainName}: {$Result['Availability']}\n";
  }
}
    

    I could also choose to register the first available name (again, consultation with your significant other is recommended here). I'll package up the contact information since I'll need it a couple of times:

    $ContactInfo = array(
      'ContactType'      => 'PERSON',
      'FirstName'        => 'Jeff',
      'LastName'         => 'Barr',
      'OrganizationName' => 'Amazon Web Services',
      'AddressLine1'     => 'XXXX  Xth Avenue',
      'City'             => 'Seattle',
      'State'            => 'WA',
      'CountryCode'      => 'US',
      'ZipCode'          => '98101',
      'PhoneNumber'      => '+1.206XXXXXXX',
      'Email'            => 'jbarr@amazon.com');
    

    And then I use the registerDomain function to register the domain:

    if ($Result['Availability'] === 'AVAILABLE')
    {
      echo "Registering {$DomainName}\n");
    
      $Result = $Client->registerDomain(array(
        'DomainName'              => $DomainName,
        'IdnLangCode'             => 'eng',
        'AutoRenew'               => true,
        'DurationInYears'         => 1,
        'BillingContact'          => $ContactInfo,
        'RegistrantContact'       => $ContactInfo,
        'TechContact'             => $ContactInfo,
        'AdminContact'            => $ContactInfo,
        'OwnerPrivacyProtected'   => true,
        'AdminPrivacyProtected'   => true,
        'TechPrivacyProtected'    => true,
        'BillingPrivacyProtected' => true));
    }
    

    Geo Routing
    Route 53's new Geo Routing feature lets you choose the most appropriate AWS resource for content delivery based on the location where the DNS queries originate. You can now build applications that respond more efficiently to user requests, with responses that are wholly appropriate for the location. Each location (a continent, a country, or a US state) can be independently mapped to static or dynamic AWS resources. Some locations can receive static resources served from S3 while others receive dynamic resources from an application running on EC2 or Elastic Beanstalk.

    You can use this feature in many different ways. Here are a few ideas to get you started:

• Global Applications - Route requests to Amazon Elastic Compute Cloud (EC2) instances hosted in an AWS Region on the same continent as the request. You could do this to maximize performance or to meet legal or regulatory requirements.
• Content Management - Provide users with access to content that has been optimized, customized, licensed, or approved for their geographic location. For example, you could choose to use distinct content and resources for red and blue portions of the United States. Or, you could run a contest or promotion that is only valid in certain parts of the world and use this feature to provide an initial level of filtering.
• Consistent Endpoints - Set up a mapping of locations to endpoints to ensure that a particular location always maps to the same endpoint. If you are running an MMOG, routing based on location can increase performance, reduce latency, give you better control over time-based scaling, and increase the likelihood that users with similar backgrounds and cultures will participate in the same shard of the game.

To make use of this feature, you simply create some Route 53 Record Sets that have the Routing Policy set to Geolocation. Think of each Record Set as a mapping from a DNS entry (e.g. www.jeff-barr.com) to a particular AWS resource: an S3 bucket, an EC2 instance, or an Elastic Load Balancer. With today's launch, each Record Set with a Geolocation policy becomes effective only when the incoming request for the DNS entry originates within the bounds (as determined by an IP-to-geo lookup) of a particular continent, country, or US state. The Record Sets form a hierarchy in the obvious way and the most specific one is always used. You can also choose to create a default entry that will be used if no other entries match.

    You can set up this feature from the AWS Management Console, the Route 53 API, or the AWS Command Line Interface (CLI). Depending on your application, you might want to think about an implementation that generates Record Sets based on information coming from a database of some sort.

    Let's say that I want to provide static content to most visitors to www.jeff-barr.com, and dynamic content to visitors from Asia. Here's what I need to do. First I create a default Record Set for "www" that points to my S3 bucket:

Then I create another Record Set for "www", this one geolocated for Asia. It points to an Elastic Load Balancer:
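
The same change can be made through the Route 53 API. Here's a rough sketch using the AWS SDK for Java; the hosted zone ID and the load balancer's DNS name are placeholders, and I model the Asia entry as a simple CNAME rather than an alias:

import com.amazonaws.services.route53.AmazonRoute53Client;
import com.amazonaws.services.route53.model.*;

AmazonRoute53Client route53 = new AmazonRoute53Client();

// A geolocated Record Set for Asia; queries from other locations
// fall through to the default Record Set for "www"
ResourceRecordSet asia = new ResourceRecordSet()
    .withName("www.jeff-barr.com.")
    .withType(RRType.CNAME)
    .withSetIdentifier("asia-traffic")    // required for geolocation Record Sets
    .withGeoLocation(new GeoLocation().withContinentCode("AS"))
    .withTTL(300L)
    .withResourceRecords(new ResourceRecord("my-elb-1234567890.ap-southeast-1.elb.amazonaws.com"));  // placeholder

route53.changeResourceRecordSets(new ChangeResourceRecordSetsRequest()
    .withHostedZoneId("Z1234567890ABC")   // placeholder
    .withChangeBatch(new ChangeBatch().withChanges(
        new Change(ChangeAction.CREATE, asia))));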

    Price Reduction
Last, but certainly not least, I am happy to tell you that we have reduced the prices for Standard and LBR (Location-Based Routing) queries by 20%. The following prices go into effect as of August 1, 2014:

    1. Standard Queries - $0.40 per million queries for the first billion queries per month; $0.20 per million queries after that.
    2. LBR Queries - $0.60 per million queries for the first billion queries per month; $0.30 per million queries after that.
    3. Geo DNS Queries - $0.70 per million queries for the first billion queries per month; $0.35 per million queries after that.

    Available Now
These new features are available now and the price reduction goes into effect tomorrow.

    -- Jeff;

    PS - Thanks to Steve Nelson of AP42 for digging up the Internic Domain Registration Template!

  • Auto Scaling Update - Lifecycle Management, Standby State, and DetachInstances

    Auto Scaling is a key AWS service. You can use it to build resilient, highly scalable applications that react to changes in load by launching or terminating Amazon EC2 instances as needed, all driven by system or user-defined metrics collected and tracked by Amazon CloudWatch.

Today we are enhancing Auto Scaling with the addition of three features that give you additional control over the EC2 instances managed by each of your Auto Scaling Groups. You can now exercise additional control over the instance launch and termination process using Lifecycle Hooks, remove instances from an Auto Scaling Group, and put instances into the new Standby state for troubleshooting or maintenance.

    Lifecycle Actions & Hooks
    Each EC2 instance in an Auto Scaling Group goes through a defined set of states and state transitions during its lifetime. In response to a Scale Out Event, instances are launched, attached to the group, and become operational. Later, in response to a Scale In Event, instances are removed from the group and then terminated. With today's launch we are giving you additional control of the instance lifecycle at the following times:

    • After it has been launched but before it is attached to the group (Auto Scaling calls this state Pending). This is your opportunity to perform any initialization operations that are needed to fully prepare the instance. You can install and configure software, create, format, and attach EBS volumes, connect the instance to message queues, and so forth.
    • After it has been detached from the group but before it has been terminated (Auto Scaling calls this state Terminating). You can do any additional work that is needed to fully decommission the instance. You can capture a final snapshot of any work in progress, move log files to long-term storage, or hold malfunctioning instances off to the side for debugging.

    You can configure a set of Lifecycle actions for each of your Auto Scaling Groups. Messages will be sent to a notification target for the group (an SQS queue or an SNS topic) each time an instance enters the Pending or Terminating state. Your application is responsible for handling the messages and implementing the appropriate initialization or decommissioning operations.

    After the message is sent, the instance will be in the Pending:Wait or Terminating:Wait state, as appropriate. Once the instance enters this state, your application is given 60 minutes to do the work. If the work is going to take more than 60 minutes, your application can extend the time by issuing a "heartbeat" to Auto Scaling. If the time (original or extended) expires, the instance will come out of the wait state.

    After the instance has been prepared or decommissioned, your application must tell Auto Scaling that the lifecycle action is complete, and that it can move forward. This will set the state of the instance to Pending:Proceed or Terminating:Proceed.

You can create and manage your lifecycle hooks from the AWS Command Line Interface (CLI) or from the Auto Scaling API. Here are the most important functions (a rough Java sketch follows the list):

    1. PutLifecycleHook - Create or update a lifecycle hook for an Auto Scaling Group. Call this function to create a hook that acts when instances launch or terminate.
2. CompleteLifecycleAction - Signify completion of a lifecycle action for a lifecycle hook. Call this function when your hook has successfully set up or decommissioned an instance.
    3. RecordLifecycleActionHeartbeat - Record a heartbeat for a lifecycle action. Call this function to extend the timeout for a lifecycle action.
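
Here's a rough sketch of the launch-time flow in Java; the group name, queue ARN, and role ARN are placeholders, and the action token would actually arrive in the notification message:

import com.amazonaws.services.autoscaling.AmazonAutoScalingClient;
import com.amazonaws.services.autoscaling.model.*;

AmazonAutoScalingClient autoScaling = new AmazonAutoScalingClient();

// Create a hook that acts when instances launch
autoScaling.putLifecycleHook(new PutLifecycleHookRequest()
    .withAutoScalingGroupName("my-asg")                                        // placeholder
    .withLifecycleHookName("prepare-instance")
    .withLifecycleTransition("autoscaling:EC2_INSTANCE_LAUNCHING")
    .withNotificationTargetARN("arn:aws:sqs:us-east-1:123456789012:my-queue")  // placeholder
    .withRoleARN("arn:aws:iam::123456789012:role/my-lifecycle-role")           // placeholder
    .withHeartbeatTimeout(3600));

// The token below is delivered in the notification message
String token = "token-from-notification-message";

// If preparation is running long, extend the timeout with a heartbeat
autoScaling.recordLifecycleActionHeartbeat(new RecordLifecycleActionHeartbeatRequest()
    .withAutoScalingGroupName("my-asg")
    .withLifecycleHookName("prepare-instance")
    .withLifecycleActionToken(token));

// Signal completion so that the instance moves to Pending:Proceed
autoScaling.completeLifecycleAction(new CompleteLifecycleActionRequest()
    .withAutoScalingGroupName("my-asg")
    .withLifecycleHookName("prepare-instance")
    .withLifecycleActionToken(token)
    .withLifecycleActionResult("CONTINUE"));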

    Standby State
    You can now move an instance from the InService state to the Standby state, and back again. When an instance is standing by, it is still managed by the Auto Scaling Group but it is removed from service until you set it back to the InService state. You can use this state to update, modify, or troubleshoot instances. You can check on the state of the instance after specific events, and you can set it aside in order to retrieve important logs or other data.

    If there is an Elastic Load Balancer associated with the Auto Scaling Group, the transition to the standby state will deregister the instance from the Load Balancer. The transition will not take effect until traffic ceases; this may take some time if you enabled connection draining for the Load Balancer.
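
Here's what the round trip might look like in Java; the group and instance names are placeholders:

import com.amazonaws.services.autoscaling.AmazonAutoScalingClient;
import com.amazonaws.services.autoscaling.model.EnterStandbyRequest;
import com.amazonaws.services.autoscaling.model.ExitStandbyRequest;

AmazonAutoScalingClient autoScaling = new AmazonAutoScalingClient();

// Move the instance to Standby; decrementing the desired capacity
// keeps Auto Scaling from launching a replacement
autoScaling.enterStandby(new EnterStandbyRequest()
    .withAutoScalingGroupName("my-asg")       // placeholder
    .withInstanceIds("i-1a2b3c4d")            // placeholder
    .withShouldDecrementDesiredCapacity(true));

// ... update, modify, or troubleshoot the instance ...

// Return the instance to service
autoScaling.exitStandby(new ExitStandbyRequest()
    .withAutoScalingGroupName("my-asg")
    .withInstanceIds("i-1a2b3c4d"));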

    DetachInstances
    You can now remove an instance from an Auto Scaling Group and manage it independently. The instance can remain unattached, or you can attach it to another Auto Scaling Group if you'd like. When you call the DetachInstances function, you can also request a change in the desired capacity for the group.

    You can use this new functionality in a couple of different ways. You can move instances from one Auto Scaling Group to another to effect an architectural change or update. You can experiment with a mix of different EC2 instance types, adding and removing instances in order to find the best fit for your application.

If you are new to the entire Auto Scaling concept, you can use this function to do some experimentation and to gain some operational experience in short order. Create a new Launch Configuration using the CreateLaunchConfiguration function and a new Auto Scaling Group using CreateAutoScalingGroup, supplying the Instance Id of an existing EC2 instance in both cases. Do your testing and then call DetachInstances to take the instance out of the Auto Scaling Group.

    You can also use the new detach functionality to create an "instance factory" of sorts. Suppose your application assigns a fresh, fully-initialized EC2 instance to each user when they log in. Perhaps the application takes some time to initialize, but you don't want your users to wait for this work to complete. You could create an Auto Scaling Group and set it up so that it always maintains several instances in reserve, based on the expected login rate. When a user logs in, you can allocate an instance, detach it from the Auto Scaling Group, and dedicate it to the user in short order. Auto Scaling will add fresh instances to the group in order to maintain the desired amount of reserve capacity.
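
Here's a rough sketch of that allocation step in Java; the group and instance names are placeholders:

import com.amazonaws.services.autoscaling.AmazonAutoScalingClient;
import com.amazonaws.services.autoscaling.model.DetachInstancesRequest;

AmazonAutoScalingClient autoScaling = new AmazonAutoScalingClient();

// Detach a warm instance for the user who just logged in; leaving the
// desired capacity unchanged tells Auto Scaling to launch a fresh
// instance to replenish the reserve
autoScaling.detachInstances(new DetachInstancesRequest()
    .withAutoScalingGroupName("login-pool")   // placeholder
    .withInstanceIds("i-1a2b3c4d")            // placeholder
    .withShouldDecrementDesiredCapacity(false));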

    Available Now
    All three of these new features are available now and you can start using them today. They are accessible from the AWS Command Line Interface (CLI) and the Auto Scaling API.

    -- Jeff;

  • Amazon ElastiCache Flexible Node Placement

30 Jul 2014 in Amazon ElastiCache

Amazon ElastiCache makes it easy for you to deploy an in-memory cache in the cloud using the Memcached or Redis engines.

    Today we are launching a new flexible node placement model for ElastiCache. Your Cache Clusters can now span multiple Availability Zones within a Region. This will help to improve the reliability of the Cluster.

You can now choose the Availability Zone for new nodes when you create a new Cache Cluster or add more nodes to an existing Cluster. You can specify the desired number of nodes in each Availability Zone or you can simply choose the Spread Nodes Across Zones option. If the cluster is within a Virtual Private Cloud (VPC), you can place nodes in Availability Zones that are part of the selected cache subnet group (read Creating a Cache Cluster in a VPC to learn more).
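
Here's a rough sketch of the same thing done programmatically with the AWS SDK for Java; the cluster ID, node type, and zone names are my own placeholders:

import com.amazonaws.services.elasticache.AmazonElastiCacheClient;
import com.amazonaws.services.elasticache.model.CreateCacheClusterRequest;

AmazonElastiCacheClient elastiCache = new AmazonElastiCacheClient();

// Create a three-node Memcached cluster with one node in each
// of three Availability Zones
elastiCache.createCacheCluster(new CreateCacheClusterRequest()
    .withCacheClusterId("my-cache-cluster")   // placeholder
    .withEngine("memcached")
    .withCacheNodeType("cache.m1.small")      // placeholder
    .withNumCacheNodes(3)
    .withPreferredAvailabilityZones("us-east-1a", "us-east-1b", "us-east-1d"));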

    Here is how you control node placement when you create a new Cache Cluster:

    Here is how you control zone placement when you add nodes to an existing cluster:

    This new feature is available now and you can start using it today!

    -- Jeff;

  • New Amazon Climate Research Grants

29 Jul 2014 in HPC

    Many of my colleagues are focused on projects that lead to a reduction in the environmental impact of our work. Online shopping itself is inherently more environmentally friendly than traditional retailing. Other important initiatives include Frustration-Free Packaging, Environmentally Friendly Packaging, our global Kaizen program, Sustainable Building Design, and a selection of AmazonGreen products. On the AWS side, the US West (Oregon) and AWS GovCloud (US) Regions make use of 100% carbon-free power.

In conjunction with our friends at NASA, we announced the OpenNEX (NASA Earth Exchange) program and the OpenNEX Challenge late last year. OpenNEX is a collection of data sets produced by Earth science satellites (over 32 TB at last count) and a set of virtual labs, lectures, and Amazon Machine Images (AMIs) for those interested in learning about the data and how to process it on AWS. For example, you can learn how to use Python, R, or shell scripts to interact with the OpenNEX data, generate a true-color Landsat image, enhance Landsat images with atmospheric corrections, or work with the NEX Downscaled Climate Projections (NEX-DCP30).

    Amazon Climate Research Grants
We are interested in exploring ways to use computational analysis to drive innovative research into climate change. In order to help drive this work forward, we are now calling for proposals for Amazon Climate Research Grants. In early September, we will award grants of free access to supercomputing resources running on the Amazon Elastic Compute Cloud (EC2).

The grants will provide access to more than fifty million core hours via the EC2 Spot Market. Our goal is to encourage and accelerate research that will result in an improved understanding of the scope and effects of climate change, along with analyses that could suggest potential mitigating actions. Recipients will have the opportunity to deliver an update on their progress and to reveal early findings at the AWS re:Invent conference in mid-November.

    Timelines
    If you are interested in applying for an Amazon Climate Research Grant, here are some dates to keep in mind:

    • July 29, 2014 - Call for proposals opens.
    • August 29, 2014 - Submissions are due.
    • Early September 2014 - Recipients notified; AWS grants issued.
    • November 2014 - Recipients present initial research and findings at AWS re:Invent.

To learn more or to submit a proposal, please visit the Amazon Climate Research Grants page.

    AWS for HPC
    Let's wrap up this post with a quick look at an AWS HPC success story!

    The Globus team at the University of Chicago/Argonne National Lab used an AWS Research grant to create the Galaxy instance and use EC2 Spot instances to run various climate impact models and applications that project irrigation water availability and agricultural production under climate change. You can learn more about this "Science as a Service on AWS" project by taking a peek at the following presentation:

    Your Turn
I am looking forward to taking a look at the proposals and to seeing the first results at re:Invent. If you have an interesting and relevant project in mind, I invite you to apply now!

    -- Jeff;

  • AWS Week in Review - July 21, 2014

28 Jul 2014

    Let's take a quick look at what happened in AWS-land last week:

    Monday, July 21
    Tuesday, July 22
    Wednesday, July 23
    Thursday, July 24
    Friday, July 25

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;

  • Elastic Load Balancing Connection Timeout Management

    When your web browser or your mobile device makes a TCP connection to an Elastic Load Balancer, the connection is used for the request and the response, and then remains open for a short amount of time for possible reuse. This time period is known as the idle timeout for the Load Balancer and is set to 60 seconds. Behind the scenes, Elastic Load Balancing also manages TCP connections to Amazon EC2 instances; these connections also have a 60 second idle timeout.

In most cases, a 60 second timeout is long enough to allow for the potential reuse that I mentioned earlier. However, in some circumstances, different idle timeout values are more appropriate. Some applications can benefit from a longer timeout because they create a connection and leave it open for polling or extended sessions. Other applications tend to have short, non-recurring requests to AWS and the open connection will hardly ever end up being reused.

    In order to better support a wide variety of use cases, you can now set the idle timeout for each of your Elastic Load Balancers to any desired value between 1 and 3600 seconds (the default will remain at 60). You can set this value from the command line or through the AWS Management Console.

    Here's how to set it from the command line:

    $ elb-modify-lb-attributes myTestELB --connection-settings "idletimeout=120" --headers
    
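You can also make the change programmatically. Here's a rough sketch using the AWS SDK for Java, reusing the load balancer name from the command above:

import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancingClient;
import com.amazonaws.services.elasticloadbalancing.model.ConnectionSettings;
import com.amazonaws.services.elasticloadbalancing.model.LoadBalancerAttributes;
import com.amazonaws.services.elasticloadbalancing.model.ModifyLoadBalancerAttributesRequest;

AmazonElasticLoadBalancingClient elb = new AmazonElasticLoadBalancingClient();

// Raise the idle timeout from the 60 second default to 120 seconds
elb.modifyLoadBalancerAttributes(new ModifyLoadBalancerAttributesRequest()
    .withLoadBalancerName("myTestELB")
    .withLoadBalancerAttributes(new LoadBalancerAttributes()
        .withConnectionSettings(new ConnectionSettings().withIdleTimeout(120))));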

    And here is how to set it from the AWS Management Console:

    This new feature is available now and you can start using it today! Read the documentation to learn more.

    -- Jeff;

  • Big Data Update - New Blog and New Web-Based Training

22 Jul 2014 in Big Data, Training

    The topic of big data comes up almost every time I meet with current or potential AWS customers. They want to store, process, and extract meaning from data sets that seemingly grow in size with every meeting.

    In order to help our customers to understand the full spectrum of AWS resources that are available for use on their big data problems, we are introducing two new resources -- a new AWS Big Data Blog and web-based training on Big Data Technology Fundamentals.

    AWS Big Data Blog
The AWS Big Data Blog is a way for data scientists and developers to learn big data best practices, discover which managed AWS Big Data services are the best fit for their use case, and get started with those services. Our goal is to make this the hub for developers to discover new ways to collect, store, clean, process, and visualize data at any scale.

    Readers will find short tutorials with code samples, case studies that demonstrate the unique benefits of doing big data on AWS, new feature announcements, partner- and customer-generated demos and tutorials, and tips and best practices for using AWS big data services.

    The first two posts on the blog show you how to Build a Recommender with Apache Mahout on Amazon Elastic MapReduce and how to Power Gaming Applications with Amazon DynamoDB.

    Big Data Training
If you are looking for a structured way to learn about the tools, techniques, and options available as you dig into big data, our new web-based Big Data Technology Fundamentals course should be of interest to you.

    You should plan to spend about three hours going through this course. You will first learn how to identify common tools and technologies that can be used to create big data solutions. Then you will gain an understanding of the MapReduce framework, including the map, shuffle and sort, and reduce components. Finally, you will learn how to use the Pig and Hive programming frameworks to analyze and query large amounts of data.

    You will need a working knowledge of programming in Java, C#, or a similar language in order to fully benefit from this training course.

    The web-based course is offered at no charge, and can be used on its own or to prepare for our instructor-led Big Data on AWS course.

    -- Jeff;

  • AWS Week in Review - July 14, 2014

21 Jul 2014

    Let's take a quick look at what happened in AWS-land last week:

    Monday, July 14
Tuesday, July 15
    Wednesday, July 16
    Thursday, July 17
    Friday, July 18

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;

  • AWS Support API Update - Attachment Creation and Lightweight Monitoring

16 Jul 2014 in AWS Support

The AWS Support API provides you with programmatic access to your support cases and to the AWS Trusted Advisor. Today we are extending the API in order to give you more control over the cases that you create and a new, lightweight way to access information about your cases. The examples in this post make use of the AWS SDK for Java.

    Creating Attachments for Support Cases
    When you create a support case, you may want to include additional information along with the case. Perhaps you want to attach some sample code, a protocol trace, or some screen shots. With today's release you can now create, manage, and use attachments programmatically.

    My colleague Kapil Sharma provided me with some sample Java code to show you how to do this. Let's walk through it. The first step is to create an Attachment from a file (File1 in this case):

Attachment attachment = new Attachment();
attachment.setData(ByteBuffer.wrap(Files.readAllBytes(FileSystems.getDefault().getPath("", "File1"))));
attachment.setFileName("Attachment.txt");
    

    Then you create a List of the attachments for the case:

List<Attachment> attachmentSet = new ArrayList<Attachment>();
    attachmentSet.add(attachment);
    

    And upload the attachments:

    AddAttachmentsToSetRequest addAttachmentsToSetRequest = new AddAttachmentsToSetRequest();
    addAttachmentsToSetRequest.setAttachments(attachmentSet);
    AddAttachmentsToSetResult addAttachmentsToSetResult = client.addAttachmentsToSet(addAttachmentsToSetRequest);
    

    With the attachment or attachments uploaded, you next need to get an Id for the set:

    String attachmentSetId = addAttachmentsToSetResult.getAttachmentSetId();
    

    And then you are ready to create the actual support case:

    CreateCaseRequest request = new CreateCaseRequest()
        .withAttachmentSetId(attachmentSetId)
        .withServiceCode(serviceCode)
        .withCategoryCode(categoryCode)
        .withLanguage(language)
        .withCcEmailAddresses(ccEmailAddress)
        .withCommunicationBody(communicationBody)
        .withSubject(caseSubject)
        .withSeverityCode(severityCode);
    
    CreateCaseResult result = client.createCase(request);
    


    Once you have created a support case or two, you probably want to check on their status. The describeCases function lets you do just that. In the past, this function returned a detailed response that included up to 15 MB of attachments. With today's enhancement, you can now ask for a lightweight response that does not include any attachments. If you are calling describeCases to check for changes in status, you can now do this in a more efficient fashion.

DescribeCasesRequest request = new DescribeCasesRequest();

request.withCaseIdList(caseId);
request.withIncludeCommunications(false);
client.describeCases(request);
    

    To learn more about creating and managing cases programmatically, take a look at Programming the Life of an AWS Support Case.

    Available Now
The new functionality described in this post is available now and you can start using it today! The SDK for PHP, SDK for .NET, SDK for Ruby, SDK for Java, SDK for JavaScript in the Browser, and the AWS Command Line Interface (CLI) have been updated.

    -- Jeff;