Using AWS with COM

This article investigates options for integrating C++ code with the cloud computing infrastructure from Amazon Web Services.

Details

Submitted By: Craig@AWS
Created On: February 23, 2011 6:53 PM GMT
Last Updated: February 23, 2011 6:53 PM GMT

Where Does It All Begin?

It all begins with the AWS SDK for .NET. The software development kit (SDK) provides an application programming interface (API) that you can use to create and define cloud-based storage systems (Amazon Simple Storage Service—Amazon S3) or use the Amazon infrastructure for computation (Amazon Elastic Compute Cloud—Amazon EC2). Begin by clicking Download AWS .NET SDK. To use specific services like Amazon S3, you need additional registration: Visit the Amazon S3 site, and click Sign Up For Amazon S3.

Err . . . Is This All C# Based? What About My Existing C++ Apps?

Yes, the SDK is all C# based. But it's not like it's the end of the world. You just need to figure out a way to interface the C# code with the C++ client code. All of this brings us to Component Object Model (COM), which you use for interoperability between these two languages.

Okay, So the AWS SDK Is COM Compliant?

That's a perfectly valid question, and the answer is no. There is no direct way by which you can call the Amazon Web Services (AWS) API in your C++ code. But you can create a COM-compliant C# wrapper around the AWS SDK, and then use a C++-based COM client to access the wrapper code. In this article, you upload data from your local machine to the cloud, and then access the data. C# code uses the AWS API to access the cloud: This is your server-side code. The C++ client code communicates its requests to the C# code using COM.

I Have C# and the AWS SDK Installed: What Else Do I Need?

Make sure you have Oleview.exe, tlbexp.exe, regasm.exe, and guidgen.exe installed on your machine.

A C# App That Can Load an Image onto the Cloud

You begin by creating an AmazonS3Client object that puts/gets data to/from the cloud. The AWSClientFactory class provides an easy way to create the client:

using Amazon.S3; 
using Amazon.S3.Model; 

class s3access { 
    AmazonS3 client;
    public void initClient() { 
        client = Amazon.AWSClientFactory.CreateAmazonS3Client(
                    "JKPLJLLXQEBPFLEDTT5Q", // access key
                    "dZSiLzi+y9rpk8D0rPRvXQaQbi7ileM67RnWPOyW"); // secret key
    } 
}

Creating any Amazon client object requires an access key and secret key pair. This information is provided as part of the normal registration process with AWS. (Note that the keys shown here are not a real pair, just random strings for demonstration.)

Now that you have created a client, try to upload an image from the local disk on to the cloud. Uploading data to the cloud is a two-step process:

  1. Create a bucket.

    All objects stored in Amazon S3 are contained in buckets. Think of buckets as similar to namespaces where the data is stored. To create a bucket, you instantiate an object of type PutBucketRequest, and then call the PutBucket method of the Amazon S3 client. The PutBucketRequest class is defined as part of the Amazon.S3.Model namespace. Here is the code to create a bucket:

    using Amazon.S3; 
    using Amazon.S3.Model; 
    using System;
    using System.IO;
    
    class s3access { 
        AmazonS3 client;
        public void initClient() { 
            client = Amazon.AWSClientFactory.CreateAmazonS3Client(
                        "JKPLJLLXQEBPFLEDTT5Q", // access key
                        "dZSiLzi+y9rpk8D0rPRvXQaQbi7ileM67RnWPOyW"); // secret key
        } 
        public void createBucket(String bucketName) { 
            try { 
                PutBucketRequest request = new Amazon.S3.Model.PutBucketRequest ();
                request.BucketName = bucketName; // come on, give bucket a name! 
                client.PutBucket(request); // all done, send the request to the cloud
            } catch (AmazonS3Exception e) { 
                Console.WriteLine(e.Message);
            }
        }
    }
    
  2. Create an object request that is for all practical purposes a wrapper around the file you want to upload, and call the PutObject method of the Amazon S3 client:
    class s3access { 
        AmazonS3 client;
        // code for initClient and createBucket
        public void uploadData(String bucketName, String fileName) { 
            try { 
                PutObjectRequest data = new PutObjectRequest();
                data.WithFilePath(fileName)
                    .WithBucketName(bucketName)
                    .WithKey("JKPLJLLXQEBPFLEDTT5Q");
                PutObjectResponse response = client.PutObject(data);
                Console.WriteLine(response.ResponseXml);
            } catch (AmazonS3Exception e) { 
                Console.WriteLine(e.Message);
            } catch (System.IO.FileNotFoundException e) { 
                Console.WriteLine(fileName + ": " + e.Message);
            } 
        }
    }
    

Before calling the client's PutObject method to upload the data, first create a PutObjectRequest object, associate the file to upload with it, and provide the name of the bucket where the file will be stored in the cloud along with a key that identifies the object. That's all there is to it. The PutObject method returns a response, and you can choose to print the underlying XML stream for diagnostics (though this is not necessary for demonstration purposes).

How Do I Know That the Data Was Uploaded?

You need a download routine to copy the data from the cloud to the local machine. The good news—there's a GetObjectRequest class, and the Amazon S3 client has a GetObject method. The bad news—unlike the upload side of things, the download code is not so trivial. Check out the following code:

class s3access { 
    AmazonS3 client;
    // code for initClient, createBucket, uploadData
    public void downloadData(String bucketName, String fileName) { 
        try { 
            GetObjectRequest data = new GetObjectRequest();
            data.WithBucketName(bucketName)
                   .WithKey("JKPLJLLXQEBPFLEDTT5Q");
            using (GetObjectResponse response = client.GetObject(data)) { 
                using (Stream s = response.ResponseStream) {
                    using (FileStream fs = new FileStream(fileName, FileMode.Create, FileAccess.Write)) {
                        byte[] buf = new byte[32768];
                        int bytesRead = 0;
                        do {
                            bytesRead = s.Read(buf, 0, buf.Length);
                            fs.Write(buf, 0, bytesRead);
                        } while (bytesRead > 0);
                        fs.Flush();
                    }
                }
            }
        } catch (AmazonS3Exception e) { 
            Console.WriteLine(e.Message);
        } 
    }
}

The idea is simple enough: Copy the data from the cloud into a local file. To do so, perform the following steps:

  1. Create a new object request (that is, instantiate an object of type GetObjectRequest).
  2. Associate the bucket and access keys with this object using the With method, similar to what you did for the PutObjectRequest.
  3. Call the GetObject method of the Amazon S3 client, which returns an object of type GetObjectResponse.

GetObjectResponse is basically a wrapper over a data stream. So, you must continuously read the data from this stream and copy the information into a local file until there's nothing more to read from the stream.

Note: If you choose to save multiple images in the same bucket, then you also need to know when to stop reading from the stream, because the response is a single data stream with no per-file boundaries; the loop simply stops when Read returns zero bytes. Because you're just using one file here, you don't have to consider the in-depth use of GetObjectResponse any more than what you've already done.

Alright, Looks Fine. What's Next?

Making the C# code COM compliant is next. What does it mean to make the code COM compliant? To begin with, you need to define an interface that the client side code can use to save or copy data. Clearly, all that this interface needs to provide are Save and Copy functions that accept a String argument denoting the file name. The client code doesn't need to know whether the data's on the cloud or in local storage. Here's the interface definition that the client code will eventually have access to:

using System; 

public interface IAWS { // Interface to Amazon Web Services 
     void Save(String source_filename); 
     void Copy(String destination_file_path, String source_file_path); 
} 

Now, you need to extend Amazon S3 access to implement the IAWS interface:

using Amazon.S3;
using Amazon.S3.Model;
using System; 
using System.IO; 

public interface IAWS { // Interface to Amazon Web Services 
     void Save(String source_filename); 
     void Copy(String destination_file_path, String source_file_path); 
} 

public class s3access : IAWS { 
    // code for initClient, createBucket, uploadData, downloadData
    public void Save(String fileName) { 
        initClient();
        String bucketName = "bucket__" + fileName;  // give a better name!
        createBucket(bucketName);
        uploadData(bucketName, fileName);
    } 
    public void Copy(String destination_file_path, String source_file_path) {
        initClient();
        String bucketName = "bucket__" + source_file_path;
        downloadData(bucketName, destination_file_path);
    }
}

Great, the Server-side Code Is in Place, But How Will the Client Know the Interface Details?

Indeed, that's a concern. You have built a DLL on top of Amazon S3, but for C++ client code that needs access to your interface code, this is not enough. You need to provide the client a way to access the types you have defined and the methods that operate on these types. So, create a type library from your DLL using tlbexp.exe, and then explore its contents using Oleview.exe:

C:\Amazon> tlbexp s3.dll /out:s3.tlb 

The C++ client imports this type library into the code. You can deal with that a bit later when this article discusses the client side of things. Here's the output from Oleview.exe when you try to view the contents of s3.tlb:

[
  uuid(1E0D7FB8-D037-3632-87A4-0751919E44B1),
  version(1.0),
  custom(90883F05-3D28-11D2-8F17-00A0C9A6186D, s3, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null)
]
library s3
{
    // TLib : mscorlib.dll : {BED7F4EA-1A96-11D2-8F08-00A0C9A6186D}
    importlib("mscorlib.tlb");
    // TLib : OLE Automation : {00020430-0000-0000-C000-000000000046}
    importlib("stdole2.tlb");

    // Forward declare all types defined in this typelib
    interface IAWS;
    interface _s3access;

    [
      odl,
      uuid(693A8EF2-F486-309F-A425-2FBE2A908040),
      version(1.0),
      dual,
      oleautomation,
      custom(0F21F359-AB84-41E8-9A78-36D110E6D2F9, IAWS)    
    ]
    interface IAWS : IDispatch {
        [id(0x60020000)]
        HRESULT Save([in] BSTR source_filename);
        [id(0x60020001)]
        HRESULT Copy(
                        [in] BSTR destination_file_path, 
                        [in] BSTR source_file_path);
    };

    [
      uuid(CDD079AB-0C5E-36E1-9F66-3EDD2602BF49),
      version(1.0),
      custom(0F21F359-AB84-41E8-9A78-36D110E6D2F9, s3access)
    ]
    coclass s3access {
        [default] interface _s3access;
        interface _Object;
        interface IAWS;
    };

    [
      odl,
      uuid(0E23DCF1-4F24-35A8-B730-9729FC7CBC4C),
      hidden,
      dual,
      oleautomation,
      custom(0F21F359-AB84-41E8-9A78-36D110E6D2F9, s3access)    

    ]
    interface _s3access : IDispatch {
    };
};

You can see from the above code that interface IAWS and its methods are available for client-side use. Also, notice a couple of other things:

  • The interface IAWS has a unique ID associated with it: 693A8EF2-F486-309F-A425-2FBE2A908040. The client code will use this ID to access the interface.
  • Both functions from the IAWS interface are available in the type library, but their prototypes are slightly different. In particular, your methods were of void type, while these functions return HRESULT. Don't panic: This is standard COM. Calls to these interface methods from the client side are translated into equivalent s3access method calls. The HRESULT return value is checked on the client side to figure out if everything went well. If all's well, the result would be S_OK.
  • The class s3access has a unique ID associated with it: CDD079AB-0C5E-36E1-9F66-3EDD2602BF49.
  • The default interface for s3access is _s3access, which you did not define (so obviously, the system generated it; the interface has a unique ID, too). Clearly, this is unnecessary, because _s3access does not export any functions and only makes debugging harder. So, it makes sense to make IAWS the default interface.

Let's attack the problem of an extra _s3access interface. Here's what you have done:

using Amazon.S3; 
using Amazon.S3.Model; 
using System;
using System.IO;
using System.Runtime.InteropServices; // this'd be needed for ClassInterface

public interface IAWS { 
     void Save(String source_filename); 
     void Copy(String destination_file_path, String source_file_path); 
}

[ClassInterface(ClassInterfaceType.None)]
public class s3access : IAWS {
    // all your usual code here, nothing changes
} 

The ClassInterface attribute of s3access is set to ClassInterfaceType.None. Rebuild the DLL, and then invoke tlbexp.exe again. Here's the relevant output from s3.tlb with your changes:

    interface IAWS : IDispatch {
        [id(0x60020000)]
        HRESULT Save([in] BSTR source_filename);
        [id(0x60020001)]
        HRESULT Copy(
                        [in] BSTR destination_file_path, 
                        [in] BSTR source_file_path);
    };
    [
      uuid(CDD079AB-0C5E-36E1-9F66-3EDD2602BF49),
      version(1.0),
      custom(0F21F359-AB84-41E8-9A78-36D110E6D2F9, s3access)
    ]
    coclass s3access {
        interface _Object;
        [default] interface IAWS;
    };

The above code clearly shows that the trick worked, and IAWS is now the default interface for s3access.

What Have I Done So Far?

So, where do you stand? Here's what you did to reach this point:

  • Created an implementation s3access in C# to upload and download data to/from the cloud
  • Created an interface IAWS and made it the default interface for s3access
  • Played around a bit with tlbexp.exe and Oleview.exe

You still need to define IDs for the s3access class and IAWS interface and register them with the COM system. The COM system works on IDs, not developer-provided names; the client side uses these IDs to access the interface methods. The IDs are declared as class/interface attributes. There are a couple of ways by which you can get the IDs here:

  1. Use guidgen.exe to generate globally unique IDs.
  2. Use the IDs you see in the output of the type library: They are unique anyway.

Choose option 1 just for the fun of using guidgen.exe. Here's the final look of the server-side code:

using Amazon.S3; 
using Amazon.S3.Model; 
using System;
using System.IO;
using System.Runtime.InteropServices; // this'd be needed for ClassInterface

[Guid("34DCD99C-9B18-4ec3-93DD-569F0F06BA5E")]
public interface IAWS { 
     void Save(String source_filename); 
     void Copy(String destination_file_path, String source_file_path); 
}

[Guid("874B031F-3748-4aa7-A587-5D60ADD59827")]
[ClassInterface(ClassInterfaceType.None)]
public class s3access : IAWS {
    // all your usual code here, nothing changes
} 

For registering the DLL, just use regasm.exe. That's all there is to it:

C:\Amazon> regasm s3.dll /codebase

Are you good to go? Yep. Nothing else needs to be done on the server side.

On to the C++ Side of Things

Consider client code in C++ that has some routine called SaveImage meant to save image data onto a local disk or network storage. Cloud computing has changed all of that; now, the data is stored in the Amazon S3 infrastructure. The client side needs access to the IAWS interface that the server side has exported and uses the Save routine from that interface. That's all there is to it.

Working your way backwards, how does the client side access the IAWS interface? First, you import the s3.tlb file in the client code. Next, make a call to the COM-provided routine CoCreateInstance, with the interface and class IDs exported using the s3.tlb file. The interface is captured in a pointer that CoCreateInstance populates; all further calls on the client side are made using this interface pointer. Here's the client implementation:

#include <windows.h>
#include <objbase.h>
#include <iostream>
#import "C:\\Amazon\\s3.tlb" no_namespace

using namespace std;

int main ( )
{
    CoInitialize(NULL);
    IAWS* wrapper = NULL; 
    const CLSID CLSID_s3access = {0x874B031F, 0x3748, 0x4aa7,
                             {0xA5, 0x87, 0x5D, 0x60, 0xAD, 0xD5, 0x98, 0x27}};

    const IID IID_IAWS = {0x34DCD99C, 0x9B18, 0x4ec3,
                             {0x93, 0xDD, 0x56, 0x9F, 0x0F, 0x06, 0xBA, 0x5E}};

    HRESULT hr = CoCreateInstance(
            CLSID_s3access, 
            NULL,
            CLSCTX_ALL, 
            IID_IAWS, 
            (void**) &wrapper);

    if (hr != S_OK) { 
        cout << "COM Initialization Failed\n";
        CoUninitialize();
        return 1;
    }

    wrapper->Save("test.gif");
    // Copy the image from the cloud to the local machine. The client side only has
    // information about the file name that was stored. If the client already has a
    // local copy, you would not want to use this code.
    wrapper->Copy("copytest.gif", "test.gif");
    CoUninitialize();
    return 0;
}

Is There Anything Else I Need to Know?

Well, yes and no, depending on how good you are with COM. Here's a quick checklist:

  • Make sure the client-side code calls CoInitialize before accessing any COM code. Otherwise, all subsequent calls will fail. Period.
  • Before exiting, call CoUninitialize to free up any system resources that the COM subsystem may be using.
  • Carefully observe how the class and interface IDs are declared in C++. These numbers are too big to fit in primitive types, and the CLSID and IID are types meant to handle COM IDs.
  • If you make a DLL out of the server-side code, the COM system will load this DLL into the address space of the client executable and then make the interface pointer available. However, there's nothing stopping you from making the server-side code an executable, in which case the COM system takes care of the underlying interprocess communication.
  • Given a random DLL and its type library without much documentation, you need to dig into the type library to understand which interfaces are available and the method signatures thereof. Oleview.exe is your best friend under such difficult circumstances.

Conclusion

This article covered a fair bit of material. You learned about using the Amazon S3 services, how to make COM-compliant C# server code, and how to write C++ client code to access the publicly released C# server interfaces. Be sure to check out the AWS documentation for details on Amazon S3 and other AWS APIs. For COM, the best available reference is Essential COM by Don Box. Happy walking in the Amazon clouds!

About the Author

Arpan Sen is a lead engineer working on the development of software in the electronic design automation industry. He has worked on several flavors of UNIX, including Solaris, SunOS, HP-UX, and IRIX as well as Linux and Microsoft Windows for several years. He takes a keen interest in software performance-optimization techniques, graph theory, and parallel computing. Arpan holds a post-graduate degree in software systems. You can reach him at arpansen@gmail.com.

©2014, Amazon Web Services, Inc. or its affiliates. All rights reserved.