Overriding Endpoints in the AWS SDK for .NET

by Jim Flanagan | in .NET

Sometimes, when sending requests with the AWS SDK for .NET, you need to explicitly specify an endpoint URL for a service. One such scenario is when a service becomes available in a new region, but the version of the SDK you are using predates that launch. To access the service in the new region without upgrading the SDK, set the ServiceURL property on the client configuration object. Here’s an example with Amazon S3:

var config = new AmazonS3Config { ServiceURL = myUrl };
var s3client = new AmazonS3Client(config);

This technique overrides the default endpoint for a single instance of the service client. However, changing the URL for a region requires a code change, and the setup must be repeated everywhere in your code that creates a service client.

We recently added a feature to the AWS SDK for .NET version 2 (2.0.7.0 onwards) that allows developers to supply their own mapping of service + region to endpoint URL. The default mapping is baked into the SDK, but it can be overridden either in the App.config or in code, so the mapping can vary from environment to environment while the code stays the same.

To point to the override mapping in your App.config, set the AWSEndpointDefinition appSetting:

<appSettings>
   ...
   <add key="AWSEndpointDefinition" value="c:\path\to\endpoints.xml" />
   ...
</appSettings>

To set the override in code, you can use the AWSConfigs.EndpointDefinition property:

AWSConfigs.EndpointDefinition = @"c:\path\to\endpoints.xml";

You can find the most up-to-date version of this file in the GitHub repository for the SDK. It’s a good idea to start with that file and then make the needed modifications. Note that you need the whole file, not just the endpoints that are different.
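For example, once an override file is in place, client construction stays region-based and picks up the custom endpoints automatically. Here is a minimal sketch (the file path and region are placeholders):

AWSConfigs.EndpointDefinition = @"c:\path\to\endpoints.xml";

// Clients are constructed as usual; the endpoint for the region is now
// resolved from the overridden mapping instead of the built-in one.
var s3Client = new AmazonS3Client(RegionEndpoint.EUWest1);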

When new services and regions are announced, we will update this file along with the SDK.

Response Logging in AWS Tools for Windows PowerShell

by Jim Flanagan | in .NET

As described in an earlier post, the AWS SDK for .NET has support for logging service responses, error responses, and metrics for AWS API calls. For the SDK, this is enabled through the App.config or Web.config file.

The AWS Tools for Windows PowerShell supports a shell variable, named $AWSHistory, that records which cmdlets have been run and the corresponding service response (and, optionally, request) data. However, until recently, developers who wanted the more detailed, configurable diagnostic logging available in the underlying SDK could only enable it by editing the configuration file for PowerShell itself (powershell.exe.config), which affects logging for all PowerShell scripts.

We recently added a cmdlet that makes it possible to configure logging with System.Diagnostics within a script. This cmdlet affects only the currently running script. It will either create simple TextWriterTraceListener instances, or allow you to add custom listeners for the trace sources associated with AWS requests.

First, let’s add a simple text listener:

Add-AWSLoggingListener MyAWSLog c:\logs\aws.txt

This command creates a TextWriterTraceListener that logs error responses from AWS requests to the file c:\logs\aws.txt. The listener is attached to the source Amazon, which matches all service requests.

If we want to send Amazon S3 errors to a separate log, we could add a second listener:

Add-AWSLoggingListener MyS3Logs c:\logs\s3.txt -Source Amazon.S3

Trace data will go only to the most-specific trace source configured for a listener. In this example, the S3 logs go to s3.txt and all other service logs go to aws.txt.

By default, listeners added in this way will log only error responses. Enabling logging of all responses and/or metrics can be done with a couple of other cmdlets:

Set-AWSResponseLogging Always
Enable-AWSMetricsLogging

These cmdlets affect all listeners added with Add-AWSLoggingListener. Similarly, we can turn those logging levels back down or off:

Set-AWSResponseLogging OnError
Set-AWSResponseLogging Never
Disable-AWSMetricsLogging

Also, we can remove specific listeners from a trace source by name:

Remove-AWSLoggingListener Amazon MyAWSLog

Now, only the S3 logger is active. One way you could use these cmdlets is to enable logging only around a particular section of script.

The Add-AWSLoggingListener cmdlet can also add instances of trace listeners created by other means, such as custom listeners. These statements do the same thing:

Add-AWSLoggingListener -Name MyAWSLog -LogFilePath c:\logs\aws.txt

$listener = New-Object System.Diagnostics.TextWriterTraceListener c:\logs\aws.txt
$listener.Name = "MyAWSLog"
Add-AWSLoggingListener -TraceListener $listener

Exposing this facility through the PowerShell cmdlets required adding the ability to programmatically add or remove listeners via the existing AWSConfigs class in the AWS SDK for .NET, in addition to the logging-related configuration items already on that class.

AWSConfigs.AddTraceListener("Amazon.DynamoDB", 
  new TextWriterTraceListener(@"c:\logs\dynamo.txt", "myDynamoLog"));
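The AWSConfigs class also provides a RemoveTraceListener counterpart for detaching a listener from a trace source by name, mirroring Remove-AWSLoggingListener. A quick sketch, using the listener added above:

// Detach the listener named "myDynamoLog" from the Amazon.DynamoDB trace source.
AWSConfigs.RemoveTraceListener("Amazon.DynamoDB", "myDynamoLog");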

Now PowerShell developers have the same access to performance and diagnostic information about AWS API calls as other AWS SDK for .NET users. For more information, refer to the Shell Configuration section of the AWS Tools for Windows PowerShell Cmdlet Reference.

Amazon S3 Transfer Utility for Windows Store and Windows Phone

by Milind Gokarn | in .NET

We recently made the Amazon S3 Transfer Utility API in the AWS SDK for .NET available for the Windows Store and Windows Phone platforms. TransferUtility is an API that runs on top of the low-level Amazon S3 API and provides utility methods for uploading and downloading files and directories. It includes support for automatic switching to multipart upload for large files, multi-threaded uploads, cancellation of in-progress operations, and notifications for transfer progress. The set of TransferUtility APIs available for the Windows Store and Windows Phone platforms includes all the methods available for the .NET 3.5 and .NET 4.5 platforms except for the upload/download directory functionality. Another point to note is that these platforms support only the asynchronous APIs.

The code snippets in the following sections show how to upload and download a file using TransferUtility. Notice that we use the IStorageFile type available on Windows Store and Windows Phone platforms. You can use File Pickers available on these platforms to get an instance of IStorageFile. This article provides information on working with File Pickers for the Windows Store platform.

Upload using Transfer Utility

The following code snippet shows the TransferUtility.UploadAsync method being used to upload a file. We use an instance of the TransferUtilityConfig class to change the default values for ConcurrentServiceRequests and MinSizeBeforePartUpload. We have changed the part size to 10 MB for multipart upload using the TransferUtilityUploadRequest.PartSize property. You can also see that we subscribe to TransferUtilityUploadRequest.UploadProgressEvent to receive upload progress notification events.

private static readonly int MB_SIZE = (int)Math.Pow(2, 20);

public async Task UploadFile(IStorageFile storageFile, string bucket, string key, AWSCredentials credentials, CancellationToken cancellationToken)
{
    var s3Client = new AmazonS3Client(credentials, RegionEndpoint.USWest2);
    var transferUtilityConfig = new TransferUtilityConfig
    {
        // Use 5 concurrent requests.
        ConcurrentServiceRequests = 5,

        // Use multipart upload for files greater than 20 MB.
        MinSizeBeforePartUpload = 20 * MB_SIZE,
    };
    using (var transferUtility = new TransferUtility(s3Client, transferUtilityConfig))
    {
        var uploadRequest = new TransferUtilityUploadRequest
        {
            BucketName = bucket,
            Key = key,
            StorageFile = storageFile,

            // Set size of each part for multipart upload to 10 MB
            PartSize = 10 * MB_SIZE
        };
        uploadRequest.UploadProgressEvent += OnUploadProgressEvent;
        await transferUtility.UploadAsync(uploadRequest, cancellationToken);
    }
}

void OnUploadProgressEvent(object sender, UploadProgressArgs e)
{
    // Process progress update events.
}
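For illustration only, the empty handler above could report progress using the properties exposed on UploadProgressArgs; how you surface the information is up to your app:

void OnUploadProgressEvent(object sender, UploadProgressArgs e)
{
    // PercentDone, TransferredBytes, and TotalBytes describe the upload progress.
    System.Diagnostics.Debug.WriteLine("Uploaded {0}% ({1}/{2} bytes)",
        e.PercentDone, e.TransferredBytes, e.TotalBytes);
}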

Download using Transfer Utility

Following is a snippet that downloads an object from S3 using the TransferUtility.DownloadAsync method. We subscribe to the TransferUtilityDownloadRequest.WriteObjectProgressEvent event to receive notifications about the download progress.

public async Task DownloadFile(IStorageFile storageFile, string bucket, string key, AWSCredentials credentials, CancellationToken cancellationToken)
{
    var s3Client = new AmazonS3Client(credentials, RegionEndpoint.USWest2);
    using (var transferUtility = new TransferUtility(s3Client))
    {
        var downloadRequest = new TransferUtilityDownloadRequest
        {
            BucketName = bucket,
            Key = key,
            StorageFile = storageFile
        };
        downloadRequest.WriteObjectProgressEvent += OnWriteObjectProgressEvent;
        await transferUtility.DownloadAsync(downloadRequest, cancellationToken);
    }
}

void OnWriteObjectProgressEvent(object sender, WriteObjectProgressArgs e)
{
    // Process progress update events.
}

In this post, we saw how to use the Amazon S3 Transfer Utility API to upload and download files on the Windows Store and Windows Phone 8 platforms. Try it out, and let us know what you think.

Tagging Amazon EC2 Instances at Launch

by Steve Roberts | in .NET

In this guest post (by James Saull from the AWS Solutions Architects team), we will show how to launch EC2 instances, retrieve the new instances’ IDs, and apply tags to them.

Tagging EC2 instances allows you to assign metadata to instances to facilitate management – especially at scale. Canonical examples include tagging instances to identify which individual or department they belong to or which application they are part of. They are also a useful way to help with cost allocation of resources when it comes time to analyze or apportion the bill. Some organizations consider tagging so important to the management of their infrastructure that they terminate anything that is not appropriately tagged!

It makes good sense to apply the minimum set of tags at the time of launch. More can be applied later if required. In this short post, we will show how to launch and tag EC2 instances.

For simplicity, we will launch a pair of instances using the latest Windows 2012 Base image:

Set-DefaultAWSRegion eu-west-1
$NewInstanceResponse =  
"WINDOWS_2012_BASE" | Get-EC2ImageByName | New-EC2Instance -InstanceType t1.micro -MinCount 2 -MaxCount 2

Now to retrieve the Instance Ids:

$Instances = ($NewInstanceResponse.Instances).InstanceId 

Next we need to compose the collection of tags we wish to apply to these instances. When writing Windows PowerShell scripts, I prefer to avoid using New-Object where reasonable, but I will eschew my personal preferences and demonstrate with and without:

$Tags = @()
$CreatedByTag = New-Object Amazon.EC2.Model.Tag
$CreatedByTag.Key = "CreatedBy"
$CreatedByTag.Value = "James"
$Tags += $CreatedByTag
$DepartmentTag = New-Object Amazon.EC2.Model.Tag
$DepartmentTag.Key = "Department"
$DepartmentTag.Value = "Solutions Architecture"
$Tags += $DepartmentTag 

We can rewrite the above as an array of key-value pairs:

$Tags = @( @{key="CreatedBy";value="James"}, `
           @{key="Department";value="Solutions Architecture"} )

The final step is to apply the tags to the instances we launched:

New-EC2Tag -ResourceId $Instances -Tags $Tags

This can be rewritten using pipes instead:

$Instances | New-EC2Tag -Tags $Tags 

We can now look at our newly tagged instances:

((Get-EC2Instance -Instance $Instances).RunningInstance).Tags

It is tempting to condense this script into a single line, but it might be more robust to code more defensively and check at each stage that failures have not been encountered (e.g., zero instances launched due to reaching an account limit):

(("WINDOWS_2012_BASE" | Get-EC2ImageByName | New-EC2Instance -InstanceType t1.micro -MinCount 2 -MaxCount 2).Instances).InstanceId | New-EC2Tag -Tags $Tags

If you have been following along in your test account and you’ve launched some instances, be sure to terminate them when you no longer need them:

$Instances | Stop-EC2Instance -Terminate -Force 

Requesting feedback on the AWS Toolkit for Visual Studio

by Andrew Fitz Gibbon | in .NET

The AWS Toolkit for Visual Studio provides extensions for Microsoft Visual Studio that make it easier to develop, debug, and deploy .NET applications using Amazon Web Services. We’re constantly working to improve these extensions and provide developers what they need to develop and manage their applications.

To better guide the future of the AWS Toolkit for Visual Studio, we’re reaching out to you for direct feedback. Below is a link to a short survey. It shouldn’t take more than 15 minutes to fill out and your responses will help us bring you a better development experience. Thank you!

Survey: Feedback on the AWS Toolkit for Visual Studio

Using Amazon SQS Dead Letter Queues

by Norm Johanson | in .NET

After Jason Fulghum recently posted a blog entry about using Amazon SQS dead letter queues with the AWS SDK for Java, I thought his post would be interesting for .NET developers as well. Here is Jason’s post with the code replaced with the C# equivalent.

Amazon SQS recently introduced support for dead letter queues. This feature is an important tool to help your applications consume messages from SQS queues in a more resilient way.

Dead letter queues allow you to set a limit on the number of times a message in a queue is processed. Consider an application that consumes messages from a queue and does some sort of processing based on the message. A bug in your application may only be triggered by certain types of messages or when working with certain data in your application. If your application receives one of these messages, it won’t be able to successfully process it and remove it from the queue. Instead, your application will continue to try to process the message again and again. While this message is being continually retried, your queue is likely filling up with other messages, which your application is unable to process because it’s stuck repeatedly processing the bad message.

Amazon SQS dead letter queues enable you to configure your application so that if it can’t successfully process a problematic message and remove it from the queue, that message will be automatically removed from your queue and delivered to a different SQS queue that you’ve designated as a dead letter queue. Another part of your application can then periodically monitor the dead letter queue and alert you if it contains any messages, which you can debug separately.

Using Amazon SQS dead letter queues is easy. You just need to configure a RedrivePolicy on your queue to specify when messages are delivered to a dead letter queue and to which dead letter queue they should be delivered. You can use the AWS Management Console, or you can access the Amazon SQS API directly with the AWS SDK for .NET.

// First, we'll need an Amazon SQS client object.
IAmazonSQS sqs = new AmazonSQSClient(RegionEndpoint.USWest2);

// Create two new queues:
//     one main queue for our application messages
//     and another to use as our dead letter queue
string qUrl = sqs.CreateQueue(new CreateQueueRequest()
{
    QueueName = "MyApplicationQueue"
}).QueueUrl;

string dlqUrl = sqs.CreateQueue(new CreateQueueRequest()
{
    QueueName = "MyDeadLetterQueue"
}).QueueUrl;

// Next, we need to get the ARN (Amazon Resource Name) of our dead
// letter queue so we can configure our main queue to deliver messages to it.
IDictionary<string, string> attributes = sqs.GetQueueAttributes(new GetQueueAttributesRequest()
{
    QueueUrl = dlqUrl,
    AttributeNames = new List<string>() { "QueueArn" }
}).Attributes;

string dlqArn = attributes["QueueArn"];

// The last step is setting a RedrivePolicy on our main queue to configure
// it to deliver messages to our dead letter queue if they haven't been
// successfully processed after five attempts.
string redrivePolicy = string.Format(
    "{{\"maxReceiveCount\":\"{0}\", \"deadLetterTargetArn\":\"{1}\"}}",
    5, dlqArn);

sqs.SetQueueAttributes(new SetQueueAttributesRequest()
{
    QueueUrl = qUrl,
    Attributes = new Dictionary<string, string>()
    {
        {"RedrivePolicy", redrivePolicy}
    }
});

There’s also a new operation in the Amazon SQS API to help you identify which of your queues are set up to deliver messages to a specific dead letter queue. If you want to know what queues are sending messages to a dead letter queue, just use the IAmazonSQS.ListDeadLetterSourceQueues operation.

IList<string> sourceQueues = sqs.ListDeadLetterSourceQueues(
    new ListDeadLetterSourceQueuesRequest()
    {
        QueueUrl = dlqUrl
    }).QueueUrls;

Console.WriteLine("Source Queues Delivering to " + qUrl);
foreach (string queueUrl in sourceQueues)
{
    Console.WriteLine(" * " + queueUrl);
}
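To complete the picture, another part of your application could periodically poll the dead letter queue and inspect anything that lands there. Here is a minimal sketch that reuses the sqs client and dlqUrl from above; the polling and alerting strategy is up to you:

// Check the dead letter queue for problematic messages.
var receiveResponse = sqs.ReceiveMessage(new ReceiveMessageRequest()
{
    QueueUrl = dlqUrl,
    MaxNumberOfMessages = 10
});

foreach (var message in receiveResponse.Messages)
{
    Console.WriteLine("Dead-lettered message: " + message.Body);
}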

Dead letter queues are a great way to add more resiliency to your queue-based applications. Have you set up any dead letter queues in Amazon SQS yet?

Steve Roberts Interviewed in Episode 255 of the PowerScripting Podcast

by Wade Matveyenko | in .NET

A few weeks ago, Steve Roberts, from the AWS SDK and Tools team for .NET, was pleased to be invited to take part in an episode of the PowerScripting Podcast, chatting with fellow developers about PowerShell here at AWS, the AWS SDK for .NET and other general topics (including his choice of superhero!). The recording of the event has now been published and can be accessed here.

As mentioned in the podcast, a new book has also just been published about using PowerShell with AWS. More details can be found on the publisher’s website at Pro PowerShell for Amazon Web Services.

Amazon DynamoDB Local Integration with AWS Toolkit for Visual Studio

by Norm Johanson | in .NET

Recently, the Amazon DynamoDB team released DynamoDB Local, a great tool for local testing and for working disconnected from the Internet. Version 1.6.3 of the AWS Toolkit for Visual Studio integrates DynamoDB Local, making it easy to manage your locally running DynamoDB instance.

In order to run DynamoDB Local, you need at least a JavaSE-1.6-compatible JRE installed, but we recommend 1.7.

Getting Started

To get started with DynamoDB Local

  1. In AWS Explorer, select Local (localhost).

  2. Now right-click on the DynamoDB node and select Connect to DynamoDB Local.

    • If you already have DynamoDB Local running, you can clear the Start new DynamoDB Local process check box. In this case, the toolkit attempts to connect to a currently running DynamoDB Local at the configured port.
    • If you haven’t installed DynamoDB Local yet, you can do that here by selecting the version you want, which is most likely the latest, and click Install. This downloads DynamoDB Local to the folder "dynamodb-local" under your home directory.
  3. Ensure that you have a proper path to Java set for the Java Executable Path and click OK to start a new instance of DynamoDB Local. AWS Explorer refreshes and shows any tables that you might have set up previously.


Connecting to DynamoDB Local

To connect to DynamoDB Local using the AWS SDK for .NET, you need to set the ServiceURL property on the AmazonDynamoDBConfig object for the client. Here is an example of setting up the DynamoDB client, assuming DynamoDB Local is running on port 8000.

var config = new AmazonDynamoDBConfig
{
   ServiceURL = "http://localhost:8000/"
};

// Access key and secret key are not required
// when connecting to DynamoDB Local and
// are left empty in this sample.
var client = new AmazonDynamoDBClient("", "", config);
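As a quick sanity check (a sketch, assuming DynamoDB Local is running on the configured port), you can list the tables in the local instance with the same client:

// List the tables stored in the local DynamoDB instance.
var tableNames = client.ListTables().TableNames;
foreach (var tableName in tableNames)
{
    Console.WriteLine(tableName);
}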


IAM Credential Rotation (Access Key Management for .NET Applications – Part 3)

by Milind Gokarn | in .NET

In the previous post in this series, we talked about using IAM users instead of using the root access keys of your AWS account. In this post, we’ll talk about another security best practice, regularly rotating your credentials.

Instead of rotating credentials only when keys are compromised, you should regularly rotate your credentials. If you follow this approach, you’ll have a process in place that takes care of rotating keys if they are compromised, instead of figuring it out when the event takes place. You’ll also have some degree of protection against keys that are compromised without your knowledge, as those keys will only be valid for a certain period, before they are rotated.

We use the following steps for access key rotation to minimize any disruption to running applications:

  • Generate new access key
  • Securely distribute the access key to your applications
  • Disable the old access key
  • Make sure that your applications work with the new key
  • Delete the old access key

Here is the code that performs some of these steps. How you implement distributing the key to your applications and testing the applications is specific to your solution.

var iamClient = new AmazonIdentityManagementServiceClient(ACCESS_KEY, SECRET_KEY, RegionEndpoint.USWest2);
            
// Generate new access key for the current account
var accessKey = iamClient.CreateAccessKey().AccessKey;
	
//
// Store the access key ID (accessKey.AccessKeyId) and 
// secret access key (accessKey.SecretAccessKey)
// securely and distribute it to your applications.
//

// Disable the old access key
iamClient.UpdateAccessKey(new UpdateAccessKeyRequest
{
  AccessKeyId = OLD_ACCESS_KEY_ID,
  Status = StatusType.Inactive
});

// 
// Confirm that your applications pick the new access key
// and work properly using the new key.
//

// Delete the old access key.
iamClient.DeleteAccessKey(new DeleteAccessKeyRequest
{
  AccessKeyId = OLD_ACCESS_KEY_ID
});

If your applications don’t work properly after switching to the new access key, you can always reactivate the old access key (from the inactive state) and switch back to it. Only delete the old access key after testing your applications, as deleted keys cannot be restored.
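For completeness, here is a sketch of that rollback step, using the same client and key ID as above:

// Reactivate the old access key so applications can switch back to it.
iamClient.UpdateAccessKey(new UpdateAccessKeyRequest
{
  AccessKeyId = OLD_ACCESS_KEY_ID,
  Status = StatusType.Active
});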


New Sample Simple Workflow

When you install the SDK from our website, many samples are installed into Visual Studio, including the Express editions. Look in the New Project Wizard, where you’ll find samples showing off many of the AWS services.


We recently added a new sample that shows how to use Amazon Simple Workflow Service (SWF) with the .NET SDK. The sample is under the AWS -> App Services section and is called AWS Simple Workflow Image Processing Sample. It shows how to use SWF to monitor images coming from S3 and to generate thumbnails of various sizes. In a real-world scenario, this would most likely be done with multiple processes monitoring SWF for decision and activity tasks. To make the sample easier to run, it is set up as a WPF app hosting virtual consoles, each representing an individual process.


The virtual console on the top is the process that chooses an image to generate thumbnails for and starts the workflow execution.

// Snippet from StartWorkflowExecutionProcessor.cs that starts the workflow execution

swfClient.StartWorkflowExecution(new StartWorkflowExecutionRequest
{
    // Serialize input to a string
    Input = Utils.SerializeToJSON(input),
    //Unique identifier for the execution
    WorkflowId = DateTime.Now.Ticks.ToString(),
    Domain = Constants.ImageProcessingDomain,
    WorkflowType = new WorkflowType
    {
        Name = Constants.ImageProcessingWorkflow,
        Version = Constants.ImageProcessingWorkflowVersion
    }
});


The virtual console in the bottom left monitors SWF for decision tasks. When it gets a decision task, it looks at the workflow’s history to see which activities have been completed and figure out which thumbnails haven’t been created yet. If one of the thumbnail sizes hasn’t been created, it schedules an activity to create the next thumbnail size. If all the thumbnails have been created, it completes the workflow.

// Snippet from ImageProcessWorkflow.cs that polls for decision tasks and decides what decisions to make.

void PollAndDecide()
{
    this._console.WriteLine("Image Process Workflow Started");
    while (!_cancellationToken.IsCancellationRequested)
    {
        DecisionTask task = Poll();
        if (!string.IsNullOrEmpty(task.TaskToken))
        {
            // Create the next set of decisions based on the current state and
            // the execution history
            List<Decision> decisions = Decide(task);

            // Complete the task with the new set of decisions
            CompleteTask(task.TaskToken, decisions);
        }
    }
}
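The Poll helper isn’t shown in the snippet; as a rough sketch, it might wrap PollForDecisionTask like this (the swfClient field and the task list constant are hypothetical names that follow the sample’s conventions):

DecisionTask Poll()
{
    var request = new PollForDecisionTaskRequest
    {
        Domain = Constants.ImageProcessingDomain,
        // Hypothetical constant naming the decision task list.
        TaskList = new TaskList { Name = Constants.ImageProcessingTaskList }
    };
    return swfClient.PollForDecisionTask(request).DecisionTask;
}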


The virtual console in the bottom right monitors SWF for activity tasks to perform. The activity task has input from the decider process that tells it which image to create a thumbnail for and what size the thumbnail should be.

// Snippet from ImageActivityWorker.cs showing the main loop for the worker that polls for tasks and processes them.

void PollAndProcessTasks()
{
    this._console.WriteLine("Image Activity Worker Started");
    while (!_cancellationToken.IsCancellationRequested)
    {
        ActivityTask task = Poll();
        if (!string.IsNullOrEmpty(task.TaskToken))
        {
            ActivityState activityState = ProcessTask(task.Input);
            CompleteTask(task.TaskToken, activityState);
        }
    }
}