AWS Developer Blog

Clock-skew correction

by Pavel Safronov | in .NET

Clock skew is the difference in time between two computers. In the context of this blog post, it’s the difference between the time on a computer running your .NET application (the client) and the time on Amazon’s servers (the server). If the client time differs from the server time by more than about 15 minutes, the requests your application makes will be signed with an incorrect time, and the server will reject them with an InvalidSignatureException or a similar error.

The solution to this problem is to correct your system clock, but unfortunately that isn’t always an option. The application may not have permissions to update the time, or the user may have set an incorrect time on purpose. The latest release of the AWS SDK for .NET includes a new feature to help out in this case: the SDK will now identify and correct for clock skew. This feature is enabled by default, so you don’t have to make any changes to your application.

For the most part, this process is transparent: the SDK makes a request, and if the server responds with a clock skew error, the SDK calculates a clock offset (the difference between client time and server time) and retries the original request with the corrected time. If you are interested in the clock offset that the SDK calculated, it is stored in AWSConfigs.ClockOffset. You can also turn this feature on or off with the AWSConfigs.CorrectForClockSkew property or with the configuration below, though disabling clock skew correction will, of course, result in the SDK throwing signature errors if there is clock skew on your system.

<configuration>
  <configSections>
    <section name="aws" type="Amazon.AWSSection, AWSSDK" />
  </configSections>
  <aws correctForClockSkew="true OR false" />
</configuration>
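
If you would rather control this behavior in code, a minimal sketch using the AWSConfigs properties described above might look like this (the service call itself is elided):

using System;
using Amazon;

class ClockSkewDemo
{
    static void Main()
    {
        // Enabled by default; set to false to opt out (signature errors
        // will then surface if your clock is skewed).
        AWSConfigs.CorrectForClockSkew = true;

        // ... make an AWS service call here; on a skew error the SDK
        // computes the offset and retries with a corrected timestamp ...

        // Inspect the offset the SDK calculated:
        Console.WriteLine("Clock offset: {0}", AWSConfigs.ClockOffset);
    }
}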

Amazon DynamoDB Document API in Ruby (Part 3 – Update Expressions)

by Trevor Rowe | in Ruby

As we showed in previous posts, it’s easy to put JSON items into Amazon DynamoDB, retrieve specific attributes with projection expressions, and fetch only data that meets some criteria with condition expressions. Now, let’s take a look at how we can conditionally modify existing items with update expressions. (Note: this code uses the same ProductCatalog table we used in Parts 1 and 2.)

The examples that follow use this helper method to perform conditional updates. It performs the UpdateItem operation with return_values set to "ALL_OLD" to capture the old item, and then uses the GetItem operation so that it can return both the old and new items for comparison. (If the update condition in the request is not met, the method sets the returned old item to nil.)

def do_update_item(key_id, update_exp, condition_exp, exp_attribute_values)
  begin
    old_result = @dynamodb.update_item(
      :update_expression => update_exp,
      :condition_expression => condition_exp,
      :expression_attribute_values => exp_attribute_values,
      :table_name => "ProductCatalog",
      :key => { :Id => key_id },
      :return_values => "ALL_OLD",
    ).data.attributes
  rescue Aws::DynamoDB::Errors::ConditionalCheckFailedException
    old_result = nil
    puts "Condition not met"
  end

  new_result = @dynamodb.get_item(
    :table_name => "ProductCatalog", :key => { :Id => key_id },
    :consistent_read => true
  ).data.item  

  return old_result, new_result
end

Using Conditional Update Expressions

Updates in DynamoDB are atomic, so applications can update items concurrently without worrying about conflicts. For example, the following code maintains a maximum value in DynamoDB with a conditional update using SET. Note that, because DynamoDB is schema-less, we don’t need to define the HighestRating attribute beforehand; we create it on the first call.

# storing a "max" value with conditional SET
# SET the attribute if it doesn't exist; otherwise, SET only if the stored highest rating < this rating
def update_highest_rating(rating)
  do_update_item(303,
    "SET HighestRating = :val",
    "attribute_not_exists(HighestRating) OR HighestRating < :val",
    {
      ":val" => rating
    }
  )
end

# multiple threads trying to SET highest value (ranging from 0 to 10)
threads = []
(0..10).to_a.shuffle.each { |i|
  # some number of "Condition not met" depending on shuffled order
  puts i
  threads[i] = Thread.new {
    update_highest_rating(i)
  }
}
threads.each {|t| t.join}

# fetch the item and examine the HighestRating stored
puts "Max = #{@dynamodb.get_item(
  :table_name => "ProductCatalog", :key => { :Id => 303 }
).data.item["HighestRating"].to_i}"   # Max = 10

We can also use update expressions to atomically maintain a count and add to a set:

# ADD to initialize/increment a counter and add to a set
threads = []
20.times do |i|
  threads[i] = Thread.new {
    do_update_item(303,
      "ADD TimesViewed :val, Tags :was_here",
      nil, # no condition expression
      {
        # Each of the 20 threads increments by 1
        ":val" => 1,

        # Each thread adds to the tag set
        # Note: type must match stored attribute's type
        ":was_here" => Set.new(["#Thread#{i}WasHere"])
      }
    )
  }
end
threads.each {|t| t.join}

# fetch the item and examine the TimesViewed attribute
item = @dynamodb.get_item(
  :table_name => "ProductCatalog", :key => { :Id => 303 }
).data.item

puts "TimesViewed = #{item["TimesViewed"].to_i}"
# TimesViewed = 20

puts "Tags = #{item["Tags"].inspect}"
# Tags = #<Set: {"#Mars", "#MarsCuriosity", "#StillRoving", ..each thread was here...}>

Similarly, we can decrement the count and remove from the set to undo our previous operations.

# Undo the views and set adding that we just performed
threads = []
20.times do |i|
  threads[i] = Thread.new {
    do_update_item(303,
      "ADD TimesViewed :val DELETE Tags :was_here",
      nil,  # no condition expression
      {
        # Each of the 20 threads decrements by 1
        ":val" => -1,

        # Each thread removes from the tag set
        # Note: type must match stored attribute's type
        ":was_here" => Set.new(["#Thread#{i}WasHere"])
      }
    )
  }
end
threads.each {|t| t.join}

# fetch the item and examine the TimesViewed attribute
item = @dynamodb.get_item(
  :table_name => "ProductCatalog", :key => { :Id => 303 }
).data.item

puts "TimesViewed = #{item["TimesViewed"].to_i}"
# TimesViewed = 0

puts "Tags = #{item["Tags"].inspect}"
# Tags = #<Set: {"#Mars", "#MarsCuriosity", "#StillRoving"}>

We can also use the REMOVE keyword to delete attributes, such as the HighestRating and TimesViewed attributes we added in the previous code.

# removing attributes from items
old_and_new = do_update_item(303,
  "REMOVE HighestRating, TimesViewed",
  nil,  # no condition expression
  nil,  # no attribute expression values
)

puts "OLD HighestRating is nil ? #{old_and_new[0]["HighestRating"] == nil}"
#=> false

puts "OLD TimesViewed is nil ? #{old_and_new[0]["TimesViewed"] == nil}"
#=> false

puts "NEW HighestRating is nil ? #{old_and_new[1]["HighestRating"] == nil}"
#=> true

puts "NEW TimesViewed is nil ? #{old_and_new[1]["TimesViewed"] == nil}"
#=> true

Conclusion

We hope this series was helpful in demonstrating expressions and how they allow you to interact with DynamoDB more flexibly than before. We’re always interested in hearing what developers would like to see in the future, so let us know what you think in the comments or through our forums!

Amazon DynamoDB Document API in Ruby (Part 2 – Condition Expressions)

by Trevor Rowe | in Ruby

As we showed in the previous post, it’s easy to put JSON items into Amazon DynamoDB and retrieve specific attributes with projection expressions. Condition expressions provide a more flexible, SQL-like way to retrieve only the items you want from DynamoDB. First, let’s put a few more items into DynamoDB using a BatchWriteItem operation. (Note: this code uses the same ProductCatalog table we used in Part 1.)

# add some more items
@dynamodb.batch_write_item(
  :request_items => {
    "ProductCatalog" => [

      {:put_request => { :item => {
        Id: 300,
        Title: "Sojourner",
        Description: "Mars Pathfinder robotic Mars rover",
        Price: BigDecimal.new("2.65e8"),
        LaunchDate: {
          M: 12, D: 4, Y: 1996
        },
        LostCommunicationDate: {
          M: 9, D: 27, Y: 1997
        },
        Features: {
          Rover: true,
        },
        NumberInStock: 10,
        OrdersPlaced: 3,
        Tags: ["#Mars", "#InStarTrekSeason4", "#InRedPlant2000", "#LostComms"],
      }}},

      {:put_request => { :item => {
        Id: 301,
        Title: "Spirit",
        Description: "Mars Exploration Rover – A",
        Price: BigDecimal.new("4.1e8"),
        LaunchDate: {
          M: 6, D: 10, Y: 2003
        },
        LostCommunicationDate: {
          M: 3, D: 22, Y: 2010
        },
        Features: {
          Rover: true,
        },
        NumberInStock: 10,
        OrdersPlaced: 5,
        Tags: Set.new(["#Mars", "#StuckOnMars", "#LostComms"]),
      }}},

      {:put_request => { :item => {
        Id: 302,
        Title: "Opportunity",
        Description: "Mars Exploration Rover – B",
        Price: BigDecimal.new("4.1e8"),
        LaunchDate: {
          M: 7, D: 7, Y: 2003
        },
        LostCommunicationDate: nil,
        Features: {
          Rover: true,
        },
        NumberInStock: 10,
        OrdersPlaced: 10,
        Tags: Set.new(["#Mars", "#StillRoving"]),
      }}},

      {:put_request => { :item => {
        Id: 303,
        Title: "Curiosity",
        Description: "car-sized robotic rover",
        Price: BigDecimal.new("2.5e9"),
        LaunchDate: {
          M: 11, D: 26, Y: 2011
        },
        LostCommunicationDate: nil,
        Features: {
          Rover: true,
          RoboticArm: true,
        },
        NumberInStock: 0,
        OrdersPlaced: 30,
        Tags: Set.new(["#Mars", "#MarsCuriosity", "#StillRoving"]),
      }}},

    ]
  }
)

Using Condition Expressions

We could also apply condition expressions to the results of a Query but, because we’re using a simple data model (only a hash key on the product Id), we demonstrate them with scans; a Query sketch follows the scan examples below. We use the following helper method to perform the scan and format the product titles returned:

def do_scan(filter_exp, exp_attribute_values)
  result = @dynamodb.scan(
    :expression_attribute_values => exp_attribute_values,
    :filter_expression => filter_exp,   # Condition Expressions are supplied through the FilterExpression parameter
    :projection_expression => "Title",
    :table_name => "ProductCatalog"
  ).data.items

  # format all retrieved titles into a single line
  return "scan retrieved: #{(result.map { |item| item["Title"] }).join(", ")}"
end

Let’s look at some example expressions and the results they return from our current ProductCatalog table:

# All products that don't have a launch month of November (11)
puts do_scan(
  "LaunchDate.M <> :m",
  {
    ":m" => 11
  }
)
# scan retrieved: 20-Bicycle 205, Opportunity, Spirit, Sojourner


# All rover products that don't have a launch month of November
puts do_scan(
  "attribute_exists(Features.Rover) AND LaunchDate.M <> :m",
  {
    ":m" => 11,
  }
)
# scan retrieved: Opportunity, Spirit, Sojourner


# Non-rovers
puts do_scan(
  "attribute_not_exists(Features.Rover)",
  nil
)
# scan retrieved: 20-Bicycle 205


# mid-range rovers or inexpensive products
puts do_scan(
  "(Price BETWEEN :low AND :high) OR Price < :verylow",
  {
    ":verylow" => BigDecimal.new("1e8"),
    ":low" => BigDecimal.new("3e8"),
    ":high" => BigDecimal.new("5e8")
  }
)
# scan retrieved: 20-Bicycle 205, Opportunity, Spirit


# within-Item referencing: more orders placed than in stock
puts do_scan(
  "OrdersPlaced > NumberInStock",
  nil
)
# scan retrieved: Curiosity


# string prefixing
puts do_scan(
  "begins_with(Title, :s)",
  {
    ":s" => "S",
  }
)
# scan retrieved: Spirit, Sojourner


# contains
puts do_scan(
  "contains(Tags, :tag1) AND contains(Tags, :tag2)",
  {
    ":tag1" => "#StuckOnMars",
    ":tag2" => "#LostComms",
  }
)
# scan retrieved: Spirit


# contains (Note: "Tags" is a list for Sojourner)
puts do_scan(
  "contains(Tags, :tag1)",
  {
    ":tag1" => "#LostComms",
  }
)
# scan retrieved: Spirit, Sojourner


# in operator
puts do_scan(
  "Id in (:id1, :id2)",
  {
    ":id1" => 302,
    ":id2" => 303,
  }
)
# scan retrieved: Curiosity, Opportunity


# equivalently, with parentheses
puts do_scan(
  "(Id = :id1) OR (Id = :id2)",
  {
    ":id1" => 302,
    ":id2" => 303,
  }
)
# scan retrieved: Curiosity, Opportunity
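
As promised, here is a hedged sketch of the same kind of filtering applied to a Query. It is illustrative rather than part of the original walkthrough: with our hash-key-only table, the key condition selects a single item, which the filter expression then keeps or drops. It assumes your SDK version supports the key_condition_expression parameter.

# Query with a filter expression (hedged sketch)
result = @dynamodb.query(
  :table_name => "ProductCatalog",
  :key_condition_expression => "Id = :id",
  :filter_expression => "attribute_exists(Features.Rover)",
  :expression_attribute_values => { ":id" => 302 },
  :projection_expression => "Title"
).data.items

puts "query retrieved: #{(result.map { |item| item["Title"] }).join(", ")}"
# query retrieved: Opportunity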

Next Steps

As you can see, condition expressions enable you to write more concise code to retrieve data. They also provide querying capabilities that were unavailable with the original access model, such as within-item references and more flexible conditions with parentheses. In an upcoming blog post, we’ll take a closer look at how we can update existing data through update expressions.

Using NuGet and Chocolatey package managers in AWS CloudFormation and AWS Elastic Beanstalk

by Jim Flanagan | in .NET

In this guest post, AWS Solutions Architect Lee Atkinson describes how you can take advantage of the NuGet and Chocolatey package managers inside your CloudFormation templates and Elastic Beanstalk applications.

AWS CloudFormation and AWS Elastic Beanstalk support the Microsoft Windows Installer for installing .msi files onto Microsoft Windows instances managed by those services. For details on how to do this for CloudFormation, see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-packages, and for Elastic Beanstalk, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html#customize-containers-windows-format-packages.

NuGet (pronounced ‘New Get’) is a package manager for installing .NET development packages onto your development machine. It is available as a Microsoft Windows Visual Studio plugin as well as a standalone command line tool. Users can install packages from, and publish packages to, a central repository for packages located at http://www.nuget.org/.

Chocolatey NuGet builds on top of NuGet to provide a package manager for Microsoft Windows applications and describes itself as "a Machine Package Manager, somewhat like apt-get, but built with Windows in mind." It has a command line tool and a central repository located at http://chocolatey.org/.

AWS CloudFormation supports the downloading of files and execution of commands on EC2 instance creation using an application called ‘cfn-init.exe’ installed on instances running Microsoft Windows. We can leverage this functionality to install and execute both NuGet and Chocolatey. For more information on bootstrapping Microsoft Windows instances in CloudFormation, see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-windows-stacks-bootstrapping.html.

Similarly, AWS Elastic Beanstalk supports the downloading of files and execution of commands on instance creation using container customization. We can use this functionality to install and execute both NuGet and Chocolatey. For more information on customizing Microsoft Windows containers in Elastic Beanstalk, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html.

Using NuGet in AWS CloudFormation

To install NuGet packages on an EC2 instance using AWS CloudFormation, we can use the NuGet command line tool. First, we need to download the tool to the Microsoft Windows instance, which we can do with a CloudFormation ‘files’ declaration. Then, to install NuGet packages, we can use a CloudFormation ‘commands’ declaration.

Here’s an excerpt from an example AWS CloudFormation template to:

  1. Download NuGet.exe
  2. Install the JSON.NET NuGet package
  3. Install the Entity Framework NuGet package
"AWS::CloudFormation::Init": {
  "config": {
    "files" : {
      "c:/tools/nuget.exe" : {
        "source" : "https://nuget.org/nuget.exe"
      }
    },
    "commands" : {
      "1-create-myapp-folder" : {
        "command" : "if not exist c:\myapp mkdir c:\myapp",
        "waitAfterCompletion" : "0"
      },
      "2-install-json-net" : {
        "command" : "c:\tools\nuget install Newtonsoft.Json -NonInteractive -OutputDirectory c:\myapp",
        "waitAfterCompletion" : "0"
      },
      "3-install-entityframework" : {
        "command" : "c:\tools\nuget install EntityFramework -NonInteractive -OutputDirectory c:\myapp",
        "waitAfterCompletion" : "0"
      }
    }
  }
}

Using Chocolatey in AWS CloudFormation

Installing and using Chocolatey is similar to NuGet above, though the recommended way to install Chocolatey is to execute a Microsoft Windows PowerShell script. Because CloudFormation ‘command’ declarations are executed by cmd.exe, we need to launch PowerShell.exe and pass the install command to it.

The Chocolatey installer and the packages it installs may modify the machine’s PATH environment variable. This adds complexity because subsequent commands run in the same session, which does not see the updated PATH. To overcome this, we use a small command file that sets the session’s PATH to the machine’s PATH before executing our command.

Here’s an excerpt from an example AWS CloudFormation template to:

  1. Create a command file ‘ewmp.cmd’ to execute a command with the machine’s PATH
  2. Install Chocolatey
  3. Install Sublime Text 3
  4. Install Firefox
"AWS::CloudFormation::Init": {
  "config": {
    "files" : {
      "c:/tools/ewmp.cmd" : {
        "content": "@ECHO OFFnFOR /F "tokens=3,*" %%a IN ('REG QUERY "HKLM\System\CurrentControlSet\Control\Session Manager\Environment" /v PATH') DO PATH %%a%%bn%*"
      }
    },
    "commands" : {
      "1-install-chocolatey" : {
        "command" : "powershell -NoProfile -ExecutionPolicy unrestricted -Command "Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))""
      },
      "2-install-sublimetext" : {
        "command" : "c:\tools\ewmp choco install sublimetext3"
      },
      "3-install-firefox" : {
        "command" : "c:\tools\ewmp choco install firefox"
      }
    }
  }
}

Using NuGet and Chocolatey together in AWS CloudFormation

Another example for NuGet is when you are cloning a repository from a version control system that does not have the NuGet packages checked-in, which means those packages are missing from the clone. In this case, you can perform a NuGet Restore, which instructs NuGet to download the packages specified within the repository.

But we need to install git before we can clone—so we use Chocolatey!

Here’s an excerpt from an example AWS CloudFormation template to:

  1. Download NuGet.exe
  2. Create a command file ‘ewmp.cmd’ to execute a command with the machine’s PATH
  3. Install Chocolatey
  4. Install Git
  5. Clone a Git repository
  6. Restore NuGet packages defined in the repository’s solution file
"AWS::CloudFormation::Init": {
  "config": {
    "files" : {
      "c:/tools/nuget.exe" : {
        "source" : "https://nuget.org/nuget.exe"
      },
      "c:/tools/ewmp.cmd" : {
        "content": "@ECHO OFFnFOR /F "tokens=3,*" %%a IN ('REG QUERY "HKLM\System\CurrentControlSet\Control\Session Manager\Environment" /v PATH') DO PATH %%a%%bn%*"
      }
    },
    "commands" : {
      "1-install-chocolatey" : {
        "command" : "powershell -NoProfile -ExecutionPolicy unrestricted -Command "Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"",
        "waitAfterCompletion" : "0"
      },
      "2-install-git" : {
        "command" : "c:\tools\ewmp choco install git",
        "waitAfterCompletion" : "0"
      },
      "3-create-myapp-folder" : {
        "command" : "if not exist c:\myapp mkdir c:\myapp",
        "waitAfterCompletion" : "0"
      },
      "4-clone-repo" : {
        "command" : "c:\tools\ewmp git clone git://github.com/aws/aws-sdk-net c:\myapp",
        "waitAfterCompletion" : "0"
      },
      "5-nuget-restore" : {
        "command" : "c:\tools\nuget restore c:\myapp\AWSSDK_DotNet.Mobile.sln",
        "waitAfterCompletion" : "0"
      }
    }
  }
}

Using NuGet and Chocolatey in AWS Elastic Beanstalk

The above examples can be translated into AWS Elastic Beanstalk config files to enable use of both NuGet and Chocolatey in Elastic Beanstalk. For Elastic Beanstalk, we create YAML .config files inside the .ebextensions folder of our source bundle.

Here’s an example .ebextensions config file to:

  1. Download NuGet.exe
  2. Install the JSON.NET NuGet package
  3. Install the Entity Framework NuGet package
files:
  c:/tools/nuget.exe:
    source: https://nuget.org/nuget.exe
commands:
  1-create-myapp-folder:
    command: if not exist c:\myapp mkdir c:\myapp
    waitAfterCompletion: 0
  2-install-json-net:
    command: c:\tools\nuget install Newtonsoft.Json -NonInteractive -OutputDirectory c:\myapp
    waitAfterCompletion: 0
  3-install-entityframework:
    command: c:\tools\nuget install EntityFramework -NonInteractive -OutputDirectory c:\myapp
    waitAfterCompletion: 0

Here’s an example .ebextensions config file to:

  1. Create a command file ‘ewmp.cmd’ to execute a command with the machine’s PATH
  2. Install Chocolatey
  3. Install Sublime Text 3
  4. Install Firefox
files:
  c:/tools/ewmp.cmd:
    content: |
      @ECHO OFF
      FOR /F "tokens=3,*" %%a IN ('REG QUERY "HKLMSystemCurrentControlSetControlSession ManagerEnvironment" /v PATH') DO PATH %%a%%b
      %*
commands:
  1-install-chocolatey:
    command: powershell -NoProfile -ExecutionPolicy unrestricted -Command "Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
  2-install-sublimetext:
    command: c:\tools\ewmp choco install sublimetext3
  3-install-firefox:
    command: c:\tools\ewmp choco install firefox

Here’s an example .ebextensions config file to:

  1. Download NuGet.exe
  2. Create a command file ‘ewmp.cmd’ to execute a command with the machine’s PATH
  3. Install Chocolatey
  4. Install Git
  5. Clone a Git repository
  6. Restore NuGet packages defined in the repository’s solution file
files:
  c:/tools/nuget.exe:
    source: https://nuget.org/nuget.exe
  c:/tools/ewmp.cmd:
    content: |
      @ECHO OFF
      FOR /F "tokens=3,*" %%a IN ('REG QUERY "HKLMSystemCurrentControlSetControlSession ManagerEnvironment" /v PATH') DO PATH %%a%%b
      %*
commands:
  1-install-chocolatey:
    command: powershell -NoProfile -ExecutionPolicy unrestricted -Command "Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
    waitAfterCompletion: 0
  2-install-git:
    command: c:\tools\ewmp choco install git
    waitAfterCompletion: 0
  3-create-myapp-folder:
    command: if not exist c:\myapp mkdir c:\myapp
    waitAfterCompletion: 0
  4-clone-repo:
    command: c:\tools\ewmp git clone git://github.com/aws/aws-sdk-net c:\myapp
    waitAfterCompletion: 0
  5-nuget-restore:
    command: c:\tools\nuget restore c:\myapp\AWSSDK_DotNet.Mobile.sln
    waitAfterCompletion: 0

Summary

I hope this provides inspiration on how you can leverage both NuGet and Chocolatey to configure your Microsoft Windows instances managed by either AWS CloudFormation or AWS Elastic Beanstalk.

Amazon DynamoDB Document API in Ruby (Part 1 – Projection Expressions)

by Trevor Rowe | in Ruby

Amazon DynamoDB launched JSON Document Support along with several improvements to the DynamoDB API. This post is part of a series where we’ll explore these features in more depth with the AWS SDK for Ruby V2. In particular, this post focuses on putting items into DynamoDB using the Ruby SDK and controlling the data we get back with projection expressions. At the end of the post, we also provide some helpful information for getting started with DynamoDB Local.

Putting JSON data into DynamoDB

DynamoDB now supports the following new data types: Maps, Lists, Booleans, and Nulls. Suppose we have a DynamoDB table for products with a hash key on an "Id" attribute. It’s easy to store such data into DynamoDB with native Ruby types:

# put a JSON item
item = {
  Id: 205, # hash key
  Title: "20-Bicycle 205",
  Description: "205 description",
  BicycleType: "Hybrid",
  Brand: "Brand-Company C",
  Price: 500,
  Gender: "B",
  Color: Set.new(["Red", "Black"]),
  ProductCategory: "Bike",
  InStock: true,
  QuantityOnHand: nil,
  NumberSold: BigDecimal.new("1E4"),
  RelatedItems: [
    341, 
    472, 
    649
  ],
  Pictures: { # JSON Map of views to url String
    FrontView: "http://example.com/products/205_front.jpg", 
    RearView: "http://example.com/products/205_rear.jpg",
    SideView: "http://example.com/products/205_left_side.jpg",
  },
  ProductReviews: { # JSON Map of stars to List of review Strings
    FiveStar: [
      "Excellent! Can't recommend it highly enough!  Buy it!",
      "Do yourself a favor and buy this."
    ],
    OneStar: [
      "Terrible product!  Do not buy this."
    ]
  }
}
dynamodb.put_item(:table_name => "ProductCatalog", :item => item)

Getting data from DynamoDB using projection expressions

Since DynamoDB now supports more interesting data types, we’ve also added projection expressions and expression attribute names to make it easier to retrieve only the attributes we want:

# get only the attributes we want with projection expressions
item = dynamodb.get_item(
  :table_name => "ProductCatalog",

  # Get the item with Id == 205
  :key => {
    :Id => 205
  },

  # for less typing, use expression attribute names to substitute
  # "ProductReviews" with "#pr" and "RelatedItems" with "#ri"
  :expression_attribute_names => {
    "#pr" => "ProductReviews",
    "#ri" => "RelatedItems",
  },

  # get Price, Color, FiveStar reviews, 0th and 2nd related items
  :projection_expression => "Price, Color, #pr.FiveStar, #ri[0], #ri[2], 
    #pr.NoStar, #ri[4]" # try projecting non-existent attributes too
).data.item

puts item["Price"].to_i
# 500

puts item["Color"].inspect
# #<Set: {"Black", "Red"}>

puts item["ProductReviews"]["FiveStar"][0]
# Excellent! Can't recommend it highly enough!  Buy it!

puts item["ProductReviews"]["FiveStar"][1]
# Do yourself a favor and buy this.

puts item["ProductReviews"]["OneStar"].inspect
# nil (because we only projected FiveStar reviews)

puts item["ProductReviews"]["NoStar"].inspect
# nil (because no NoStar reviews)

puts item["RelatedItems"]
# 0.341E3   (0th element)
# 0.649E3   (2nd element)

puts item["RelatedItems"].size
# 2 (non-existent 4th element not present)

Next Steps

As you can see, it’s easy to put and get items in DynamoDB with the AWS SDK for Ruby. In upcoming blog posts, we’ll take a closer look at expressions for filtering and updating data.

Feel free to get started on DynamoDB Local with the following code (note that it uses the credentials file approach for specifying AWS credentials):

#! /usr/bin/ruby

require "set"
require "bigdecimal"
require "aws-sdk-core"

# Configure SDK

# use credentials file at .aws/credentials
Aws.config[:credentials] = Aws::SharedCredentials.new
Aws.config[:region] = "us-west-2"

# point to DynamoDB Local, comment out this line to use real DynamoDB
Aws.config[:dynamodb] = { endpoint: "http://localhost:8000" }

dynamodb = Aws::DynamoDB::Client.new

## Create the table if it doesn't exist
begin
  dynamodb.describe_table(:table_name => "ProductCatalog")
rescue Aws::DynamoDB::Errors::ResourceNotFoundException
  dynamodb.create_table(
    :table_name => "ProductCatalog",
    :attribute_definitions => [
      {
        :attribute_name => :Id,
        :attribute_type => :N
      }
    ],
    :key_schema => [
      {
        :attribute_name => :Id,
        :key_type => :HASH
      }
    ],
    :provisioned_throughput => {
      :read_capacity_units => 1,
      :write_capacity_units => 1,
    }
  )

  # wait for table to be created
  puts "waiting for table to be created..."
  dynamodb.wait_until(:table_exists, table_name: "ProductCatalog")
  puts "table created!"
end

Amazon S3 Client-side Key Migration to AWS Key Management Service

by Hanson Char | in Java

In an earlier blog, Taming client-side key rotation with the Amazon S3 encryption client, we introduced the putInstructionFile API that makes Amazon S3 client-side key rotation easy. In the long run, however, wouldn’t it be nice if you could eliminate the administrative overhead of managing your client-side master keys, and instead have them fully managed and protected by a trusted, secure, and highly available key management service?

This is exactly where the recently launched AWS Key Management Service (KMS) can help. In this blog, we will provide an example of how you can leverage the putInstructionFile API to migrate from an S3 client-side master key to a KMS-managed customer master key (CMK). In particular, this means you can re-encrypt your existing S3 data keys (also known as envelope keys) with a different master key without touching the encrypted data, and ultimately retire your client-side master keys and remove the need to manage them.

Let’s look at some specific code.

Pre-Key Migration to AWS KMS

Suppose you have a pre-existing Amazon S3 client-side master key used for Amazon S3 client-side encryption. The code to encrypt and decrypt would typically look like this:


        // Encryption with a client-side master key
        SecretKey clientSideMasterKey = ...;
        SimpleMaterialProvider clientSideMaterialProvider = 
            new SimpleMaterialProvider().withLatest(
                    new EncryptionMaterials(clientSideMasterKey));
        AmazonS3EncryptionClient s3Old = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                clientSideMaterialProvider)
            .withRegion(Region.getRegion(Regions.US_EAST_1));
        
        // Encrypts and saves the data under the name "sensitive_data.txt" to
        // S3. Under the hood, the one-time randomly generated data key is 
        // encrypted by the client-side master key.
        byte[] plaintext = "Demo S3 Client-side Key Migration to AWS KMS!"
                .getBytes(Charset.forName("UTF-8"));
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(plaintext.length);
        String bucket = ...;
        PutObjectResult putResult = s3Old.putObject(bucket, "sensitive_data.txt",
                new ByteArrayInputStream(plaintext), metadata);
        System.out.println(putResult);

        // Retrieves and decrypts the S3 object
        S3Object s3object = s3Old.getObject(bucket, "sensitive_data.txt");
        System.out.println(IOUtils.toString(s3object.getObjectContent()));

In this example, the encrypted one-time data key is stored in the metadata of the S3 object, and the metadata of an S3 object is immutable.

Migrating to AWS KMS

To re-encrypt such a data key using a KMS-managed CMK, you can do so via the putInstructionFile API, like so:


        // Configure to use a migrating material provider that uses your
        // KMS-managed CMK for encrypting all new S3 objects, and provide
        // access to your old client-side master key
        SimpleMaterialProvider migratingMaterialProvider = 
            new SimpleMaterialProvider().withLatest(
                new KMSEncryptionMaterials(customerMasterKeyId))
                .addMaterial(new EncryptionMaterials(clientSideMasterKey));

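        // Note: 'config' below is a CryptoConfiguration instance whose
        // definition is elided in this excerpt; see the sketch at the
        // end of this post.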
        AmazonS3EncryptionClient s3Migrate = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                migratingMaterialProvider, config)
            .withRegion(Region.getRegion(Regions.US_EAST_1));

        // Re-encrypt the existing data-key from your client-side master key
        // to your KMS-managed CMK
        PutObjectResult result = s3Migrate.putInstructionFile(
            new PutInstructionFileRequest(
                new S3ObjectId(bucket, "sensitive_data.txt"),
                new KMSEncryptionMaterials(customerMasterKeyId), 
                InstructionFileId.DEFAULT_INSTRUCTION_FILE_SUFFIX));
        System.out.println(result);
        // Data key re-encrypted with your KMS-managed CMK!

Post-Key Migration to AWS KMS

Once the data-key re-encryption is complete for all existing S3 objects (created with the client-side master key), you can then begin to exclusively use the KMS-managed CMK without the client-side master key:


        // Data-key re-encryption is complete. No more client-side master key.
        SimpleMaterialProvider kmsMaterialProvider = 
                new SimpleMaterialProvider().withLatest(
                    new KMSEncryptionMaterials(customerMasterKeyId));
        AmazonS3EncryptionClient s3KmsOnly = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                kmsMaterialProvider, config)
            .withRegion(Region.getRegion(Regions.US_EAST_1));

        // Retrieves and decrypts the S3 object
        EncryptedGetObjectRequest getReq = 
            new EncryptedGetObjectRequest(bucket, "sensitive_data.txt")
                .withInstructionFileSuffix(
                    InstructionFileId.DEFAULT_INSTRUCTION_FILE_SUFFIX);
        s3object = s3KmsOnly.getObject(getReq);
        System.out.println(IOUtils.toString(s3object.getObjectContent()));

Why do we need an EncryptedGetObjectRequest in the above example? It is necessary in order to use the newly encrypted data key (via KMS) in the instruction file, and not the one in the metadata (encrypted using your old client-side master key). Of course, had you configured CryptoStorageMode.InstructionFile in the first place, such an explicit override during retrieval would not be necessary. You can find an example of using the instruction file storage mode in the earlier blog Taming client-side key rotation with the Amazon S3 encryption client.
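
For reference, a minimal sketch of opting into instruction-file storage up front might look like the following (a hedged illustration, not code from the original migration):

        // Hedged sketch: store encrypted data keys in instruction files
        // instead of object metadata, so a later re-encryption needs no
        // explicit override at retrieval time.
        CryptoConfiguration config = new CryptoConfiguration()
                .withStorageMode(CryptoStorageMode.InstructionFile);
        AmazonS3EncryptionClient s3InstructionFile = new AmazonS3EncryptionClient(
                new ProfileCredentialsProvider(),
                kmsMaterialProvider, config)
            .withRegion(Region.getRegion(Regions.US_EAST_1));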

That’s all for now. We hope you find this useful. For more S3 encryption options using AWS KMS, see Amazon S3 Encryption with AWS Key Management Service.

Mapping Cmdlets to AWS Service APIs

by Steve Roberts | in .NET

The consistency of the standardized verb and naming scheme used by Windows PowerShell makes learning the basics of the shell relatively easy, but translating knowledge of an existing API to the standard names can be difficult at first. Starting with version 2.3.19, AWS Tools for Windows PowerShell contains a new cmdlet to help with discovery: Get-AWSCmdletName. This cmdlet accepts the name of an AWS service API and emits the names of cmdlets that invoke an API matching that name pattern. It can also accept an AWS CLI command line and give you back the corresponding PowerShell cmdlet—handy if you are converting an AWS CLI sample.

Discovering Service APIs

Running the PowerShell Get-Command cmdlet with verb and/or noun filtering only gets you so far in discovering the cmdlets that are available in a module. You as a user still need to make the mental leap to associate the verb and noun combination to a known service API. Sometimes this is obvious, sometimes not so much. To get the name of a cmdlet that invokes a known AWS service API is now as easy as:

PS C:\> Get-AWSCmdletName -ApiOperation describeinstances

CmdletName              ServiceOperation
----------              ----------------
Get-EC2Instance         DescribeInstances
Get-OPSInstances        DescribeInstances

Note that the full name of the service and the noun prefix are displayed in additional columns, which are omitted from these examples for brevity.

The parameter name -ApiOperation can be omitted to save typing. You can see from the output that the cmdlet scanned all cmdlets contained in the AWS PowerShell module and output those that invoke a service API named DescribeInstances, regardless of the service.

If you know the service of interest, you can restrict the search using the optional -Service parameter:

PS C:\> Get-AWSCmdletName describeinstances -Service ec2

CmdletName              ServiceOperation
----------              ----------------
Get-EC2Instance         DescribeInstances

The value supplied to the -Service parameter can be either the prefix code that is applied to the noun part of the name of cmdlets belonging to a service, or one or more words from the service name. For example, these two commands return the same output as the example above:

PS C:\> Get-AWSCmdletName describeinstances -Service compute
PS C:\> Get-AWSCmdletName describeinstances -Service "compute cloud"

Note that all searches are case insensitive.

If you know the exact name of the service API you are interested in, then you are good to go. But what if you want to find all cmdlets that have something to do with, say, security groups (based on the premise that the term ‘securitygroup’ forms part of the API name)? You might try this:

PS C:\> Get-AWSCmdletName securitygroup

As you’ll see if you run the example, the cmdlet displays no output because no service API exactly matches that name. What we need is a more flexible way to specify the pattern to match, which we get by adding the -MatchWithRegex switch:

PS C:\> Get-AWSCmdletName securitygroup -MatchWithRegex

CmdletName                              ServiceOperation
----------                              ----------------
Approve-ECCacheSecurityGroupIngress     AuthorizeCacheSecurityGroupIngress
Get-ECCacheSecurityGroup                DescribeCacheSecurityGroups
New-ECCacheSecurityGroup                CreateCacheSecurityGroup
Remove-ECCacheSecurityGroup             DeleteCacheSecurityGroup
Revoke-ECCacheSecurityGroupIngress      RevokeCacheSecurityGroupIngress
Get-EC2SecurityGroup                    DescribeSecurityGroups
Grant-EC2SecurityGroupEgress            AuthorizeSecurityGroupEgress
Grant-EC2SecurityGroupIngress           AuthorizeSecurityGroupIngress
New-EC2SecurityGroup                    CreateSecurityGroup
Remove-EC2SecurityGroup                 DeleteSecurityGroup
Revoke-EC2SecurityGroupEgress           RevokeSecurityGroupEgress
Revoke-EC2SecurityGroupIngress          RevokeSecurityGroupIngress
Join-ELBSecurityGroupToLoadBalancer     ApplySecurityGroupsToLoadBalancer
Enable-RDSDBSecurityGroupIngress        AuthorizeDBSecurityGroupIngress
Get-RDSDBSecurityGroup                  DescribeDBSecurityGroups
New-RDSDBSecurityGroup                  CreateDBSecurityGroup
Remove-RDSDBSecurityGroup               DeleteDBSecurityGroup
Revoke-RDSDBSecurityGroupIngress        RevokeDBSecurityGroupIngress
Approve-RSClusterSecurityGroupIngress   AuthorizeClusterSecurityGroupIngress
Get-RSClusterSecurityGroups             DescribeClusterSecurityGroups
New-RSClusterSecurityGroup              CreateClusterSecurityGroup
Remove-RSClusterSecurityGroup           DeleteClusterSecurityGroup
Revoke-RSClusterSecurityGroupIngress    RevokeClusterSecurityGroupIngress

As you can see, it’s now easy to find all cmdlets that relate to a particular term, or object, across all services. When the -MatchWithRegex switch is used, the value of the -ApiOperation parameter is interpreted as a regular expression.

If we wanted to restrict the search to a specific service, we would just add the -Service parameter, as shown earlier. The -Service parameter value is always treated as a regular expression and is not affected by the -MatchWithRegex switch. Get-AWSCmdletName first applies the -Service value as a regular expression against the name of a cmdlet’s owning service; if that yields no match, it falls back to a simple text comparison against the service prefix that namespaces the cmdlet nouns.
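
For example, you might combine the regex search with a service restriction as follows (a hypothetical invocation; the rows shown are the RDS entries from the table above):

PS C:\> Get-AWSCmdletName securitygroup -MatchWithRegex -Service rds

CmdletName                           ServiceOperation
----------                           ----------------
Enable-RDSDBSecurityGroupIngress     AuthorizeDBSecurityGroupIngress
Get-RDSDBSecurityGroup               DescribeDBSecurityGroups
New-RDSDBSecurityGroup               CreateDBSecurityGroup
Remove-RDSDBSecurityGroup            DeleteDBSecurityGroup
Revoke-RDSDBSecurityGroupIngress     RevokeDBSecurityGroupIngress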

Translating from AWS CLI

The verb-noun naming standard of PowerShell is considered one of its strengths, and one that we are pleased to support to give users a consistent experience. The AWS CLI follows the AWS API naming conventions more closely. Get-AWSCmdletName has one further ability: it can make a "best effort" at translating an AWS CLI command line into the corresponding AWS PowerShell cmdlet. This can be useful when translating a sample:

PS C:\> Get-AWSCmdletName -AwsCliCommand "aws ec2 authorize-security-group-ingress"

CmdletName                           ServiceOperation
----------                           ----------------
Grant-EC2SecurityGroupIngress        AuthorizeSecurityGroupIngress

The supplied AWS CLI command is parsed to recover the service identifier and the operation name (which is stripped of any hyphens). You only need to specify enough of the command to allow the service and operation to be identified; the "aws" prefix in the parameter value can be omitted. Also, if you’ve pasted the parameter value from a sample and it contains any CLI options (identified by a "--" prefix), they are skipped.
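
For instance, a command line pasted from a sample that still carries options should resolve the same way (a hypothetical invocation; the mapping matches the DescribeInstances example shown earlier):

PS C:\> Get-AWSCmdletName -AwsCliCommand "ec2 describe-instances --region us-west-2"

CmdletName              ServiceOperation
----------              ----------------
Get-EC2Instance         DescribeInstances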

Hopefully, you’ll find this new cmdlet useful in discovering and navigating the cmdlets available for working with AWS. Do you have an idea for something that would be useful for you and potentially others? Let us know in the comments!

Automatic Pagination of Responses in the AWS SDK for .NET (Preview)

by Jim Flanagan | in .NET

As part of our recent preview release of Resource APIs for .NET, we have also exposed one of its underlying features in the low-level .NET SDK.

Many of the AWS APIs that return collections of items have a pagination interface. Rather than return all results at once, the service returns a certain number of results per request, and provides a token in the response to get the next "page" of results. In this way, you can chain requests together using the token to get as many results as you need.

Here’s what that looks like using the SDK for .NET to get all the IAM users for an account, 20 at a time:

ListUsersResponse response;
ListUsersRequest request = new ListUsersRequest { MaxItems = 20 };

do
{
    response = iam.ListUsers(request);
    ProcessUsers(response.Users);
    request.Marker = response.Marker;
}
while (response.IsTruncated);

In order to make the resource APIs feel more natural, we built in a mechanism that does something like the above code behind the scenes through an IEnumerable interface. Using the resource APIs, you can get the users like this:

var users = iam.GetUsers();
foreach (var user in users)
{
    Console.WriteLine("User: {0}", user.Name);
}

The first line does not result in a service call. No service calls are made until your code starts iterating over the IEnumerable, and subsequent calls are made as needed under the covers.

This seemed useful to expose through the low-level API as well, so as part of the SDK preview we added paginator methods to the following clients:

  • Amazon.GlacierClient
  • Amazon.IdentityManagementServiceClient
  • Amazon.OpsWorksClient
  • Amazon.SimpleNotificationServiceClient

Using the paginators from the low-level request interface looks like this:

var users = client.ListUsersEnumerator(new ListUsersRequest { MaxItems = 20 });
foreach(var user in users)
{
    Console.WriteLine("User: {0}", user.Name);
}

As usual with IEnumerable, you will need to pay special attention when using LINQ and/or the System.Linq.Enumerable extension methods. Calling extensions like .Count(), .Where(), or .Last() on one of the IEnumerables returned by these methods could result in multiple, unintended calls to the service. In those instances where you do need those methods, it can be a good idea to cache the results for as long as possible, as in the sketch below.
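
For example, here is a hedged sketch of caching the enumeration before applying multiple LINQ operators (it assumes a using System.Linq; directive):

// Materialize the pages once so Count and Last() don't trigger
// separate passes over the service.
var cachedUsers = client.ListUsersEnumerator(new ListUsersRequest { MaxItems = 20 })
                        .ToList();
Console.WriteLine("Total users: {0}", cachedUsers.Count);
Console.WriteLine("Last user: {0}", cachedUsers.Last().Name);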

Let us know if you find this facility useful. We look forward to hearing from you on GitHub and the AWS forums.

Announcing V2 of the AWS SDK for Ruby

by Trevor Rowe | in Ruby

I am excited to announce today’s stable release of version 2 of the AWS SDK for Ruby. It is available now as the aws-sdk gem on RubyGems.

Features

Version 2 of the AWS SDK for Ruby, the aws-sdk gem, provides a number of powerful features for developers, including resource interfaces, paginated responses, and waiters.

Upgrading

Version 2 of the AWS SDK for Ruby uses a different namespace, making it possible to use version 1 and version 2 in the same application.

# Gemfile
gem 'aws-sdk', '~> 2'
gem 'aws-sdk-v1'

# code
require 'aws-sdk-v1'
require 'aws-sdk'

ec2_v1 = AWS::EC2.new # v1
ec2_v2 = Aws::EC2::Resource.new # v2

This allows you to start using the version 2 SDK today without changing existing code.

Feedback

Please share your questions, comments, issues, etc. with us on GitHub. You can also catch us in our Gitter channel.

Upcoming Modularization of the AWS SDK for .NET

by Norm Johanson | in .NET

Today, I would like to announce our plans to modularize the AWS SDK for .NET into individual assemblies and NuGet packages. This work will take us a few months to complete, but we recognize this will be a pretty big change to how developers see the SDK and want to give as much of a heads-up as we can. It is our intention to have as few breaking changes as possible, but a few will be unavoidable. For those changes, we plan on marking the affected APIs obsolete in the current version of the SDK as soon as possible. The most notable breaking change will be the removal of Amazon.AWSClientFactory, because that class requires a reference to every service.

Why are we doing this?

When we first released the AWS SDK for .NET, there were 10 services and the total size of the SDK was about 600 KB. Today, the SDK supports over 40 services and has grown to over 6 MB. We’ve heard from many of our users that they want a smaller SDK containing just the services they need. This is especially important for developers using our SDK for Windows Phone and Windows Store apps.

Another reason we are doing this is the frequency of releases from AWS. If you take a look at our NuGet package, you can see that we release the SDK nearly weekly, sometimes even more frequently. Our hope is that this change will allow developers to update their SDK only when the services they use are updated.

What happens next?

It will take us a few months to update our build and release process. We’ll keep you updated as more information becomes available. Watch for methods being marked as obsolete, and move away from them as soon as possible. Because we are doing all this refactoring, this is a perfect time for feedback from users of the SDK. If there are problems the SDK is not solving for you, or if things are hard to discover, let us know. You can give feedback here, in our forums, or on GitHub.