AWS Developer Tools Blog
The Three Different APIs for Amazon S3
The AWS SDK for .NET has three different APIs to work with Amazon S3. The low-level API, found in the Amazon.S3 and Amazon.S3.Model namespaces, provides complete coverage of the S3 APIs. For easy uploads and downloads, there is the TransferUtility, found in the Amazon.S3.Transfer namespace. Finally, the File I/O API in the Amazon.S3.IO namespace gives the ability to use filesystem semantics with S3.
Low-level API
The low-level API uses the same pattern used for other service low-level APIs in the SDK. There is a client object called AmazonS3Client that implements the IAmazonS3 interface. It contains methods for each of the service operations exposed by S3. Here are examples of performing the basic operations of putting a file in S3 and getting the file back out.
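These snippets assume that an s3Client and a bucketName have already been set up. Here is a minimal sketch of that setup; the region and bucket name are assumptions for illustration only.

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

// Credentials are resolved from the SDK's standard sources (app.config,
// the credentials file, or environment variables). The region and bucket
// name below are placeholders for illustration.
IAmazonS3 s3Client = new AmazonS3Client(RegionEndpoint.USWest2);
string bucketName = "my-sample-bucket";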
// No Key is specified, so the file name ("log.txt") is used as the object key.
s3Client.PutObject(new PutObjectRequest
{
    BucketName = bucketName,
    FilePath = @"c:\data\log.txt"
});

var getResponse = s3Client.GetObject(new GetObjectRequest
{
    BucketName = bucketName,
    Key = "log.txt"
});
getResponse.WriteResponseStreamToFile(@"c:\data\log-low-level.txt");
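If you want to consume the data without writing it to disk, the GetObjectResponse also exposes the raw ResponseStream. A small sketch, reusing the same client and key as above; disposing the response releases the underlying connection.

using (var response = s3Client.GetObject(new GetObjectRequest
{
    BucketName = bucketName,
    Key = "log.txt"
}))
using (var reader = new StreamReader(response.ResponseStream))
{
    // Read the object's contents directly from the response stream
    Console.WriteLine(reader.ReadToEnd());
}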
TransferUtility
The TransferUtility runs on top of the low-level API. For putting and getting objects into S3, I would recommend using this API. It is a simple interface for handling the most common uses of S3. The biggest benefit comes with putting objects: for example, TransferUtility detects when a file is large and switches to multipart upload mode. Multipart upload gives better performance, because the parts can be uploaded simultaneously, and if a part fails, only that individual part has to be retried. Here are examples showing the same operations performed above with the low-level API.
var transferUtility = new TransferUtility(s3Client);
transferUtility.Upload(@"c:\data\log.txt", bucketName);
transferUtility.Download(@"c:\data\log-transfer.txt", bucketName, "log.txt");
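For more control over an upload, TransferUtility also accepts request objects. Here is a sketch using TransferUtilityUploadRequest; the key and part size are illustrative values, not recommendations.

var uploadRequest = new TransferUtilityUploadRequest
{
    BucketName = bucketName,
    FilePath = @"c:\data\log.txt",
    Key = "logs/log.txt",        // hypothetical key; defaults to the file name
    PartSize = 16 * 1024 * 1024  // 16 MB parts, an illustrative value
};
transferUtility.Upload(uploadRequest);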
File I/O
The File I/O API is the third API, found in the Amazon.S3.IO namespace. This API is useful for applications that want to treat S3 as a filesystem. It does this by mimicking the .NET base classes FileInfo and DirectoryInfo with the new classes S3FileInfo and S3DirectoryInfo. For example, this code shows how similar creating a directory structure in an S3 bucket is to doing so on the local filesystem.
// Create a directory called code at c:\code
DirectoryInfo localRoot = new DirectoryInfo(@"C:\");
DirectoryInfo localCode = localRoot.CreateSubdirectory("code");

// Create a directory called code in the bucket
S3DirectoryInfo s3Root = new S3DirectoryInfo(s3Client, "bucketofcode");
S3DirectoryInfo codeS3Dir = s3Root.CreateSubdirectory("code");
The following code shows how to get a list of directories and files from the root of the bucket. While enumerating the directories and files, all the paging for the underlying Amazon S3 calls is handled behind the scenes, so there is no need to keep track of a next token.
// Print out the names of the subdirectories under the root directory
foreach (S3DirectoryInfo subDirectory in s3Root.GetDirectories())
{
    Console.WriteLine(subDirectory.Name);
}

// Print the names of the files in the root directory
foreach (S3FileInfo file in s3Root.GetFiles())
{
    Console.WriteLine(file.Name);
}
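Both S3DirectoryInfo and S3FileInfo also expose an Exists property, mirroring their System.IO counterparts. A small sketch, assuming the bucketofcode bucket and the code directory created above:

// Point S3DirectoryInfo at the code directory created earlier
S3DirectoryInfo codeDir = new S3DirectoryInfo(s3Client, "bucketofcode", "code");
if (codeDir.Exists)
{
    Console.WriteLine("Found the code directory.");
}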
To write to a file in Amazon S3, you simply open a stream for writing from S3FileInfo and write to it. Once the stream is closed, the in-memory data for the stream is committed to Amazon S3. To read the data back from Amazon S3, just open the stream for reading from the S3FileInfo object.
// Write file to Amazon S3
S3DirectoryInfo artDir = s3Root.CreateSubdirectory("asciiart");
S3FileInfo artFile = artDir.GetFile("aws.txt");
using (StreamWriter writer = new StreamWriter(artFile.OpenWrite()))
{
    writer.WriteLine("   _____  __      __  _________");
    writer.WriteLine("  /  _  \\/  \\    /  \\/   _____/");
    writer.WriteLine(" /  /_\\  \\   \\/\\/   /\\_____  \\ ");
    writer.WriteLine("/    |    \\        / /        \\");
    writer.WriteLine("\\____|____/\\__/\\__/ /_________/");
}
// Read file back from Amazon S3
using (StreamReader reader = artFile.OpenText())
{
    Console.WriteLine(reader.ReadToEnd());
}
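S3FileInfo carries over other familiar FileInfo-style operations as well. For example, CopyToLocal copies an object down to the local filesystem; the destination path below is an assumption for illustration.

// Copy the object from Amazon S3 down to the local filesystem
artFile.CopyToLocal(@"c:\data\aws-art.txt");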