

Amazon S3 Encryption Client Now Available for C++ Developers

by Jonathan Henson | in C++

My colleague, Conor Campbell, has great news for C++ developers who need to store sensitive information in Amazon S3.

— Jonathan

Many customers have asked for an Amazon S3 Encryption Client that is compatible with the existing Java client, and today we are delighted to provide it. You can now use the AWS SDK for C++ to securely retrieve and store objects, using encryption and decryption, to and from Amazon S3.

The Amazon S3 Encryption Client encrypts your S3 objects using envelope encryption with a master key that you supply and a generated content encryption key. The content encryption key is used to encrypt the body of the S3 object, while the master key is used to encrypt the content encryption key. There are two different types of encryption materials representing the master key that you can use:

  • Simple Encryption Materials. This mode uses AES Key Wrap to encrypt and decrypt the content encryption key.
  • KMS Encryption Materials. This mode uses an AWS KMS customer master key (CMK) to encrypt and decrypt the content encryption key.

In addition, we have provided the ability to specify the crypto mode that controls the encryption for your S3 objects. Currently, we support three crypto modes:

  • Encryption Only. Uses AES-CBC.
  • Authenticated Encryption. Uses AES-GCM and allows AES-CTR for Range-Get operations.
  • Strict Authenticated Encryption. This is the most secure option. It uses AES-GCM and does not allow Range-Get operations because we cannot ensure cryptographic integrity protections for the data without verifying the entire authentication tag.

Users can also choose where to store the encryption metadata for the object: either in the metadata of the S3 object or in a separate instruction file. The encryption information includes the following:

  • Encrypted content encryption key
  • Initialization vector
  • Crypto tag length
  • Materials description
  • Content encryption key algorithm
  • Key wrap algorithm

The client handles all of the encryption and decryption details under the hood. All you need to upload an object to or download an object from S3 is a simple PUT or GET operation. When you call a GET operation, the client automatically detects which crypto mode and storage method were used, eliminating any confusion about selecting an appropriate crypto configuration for decryption.

Here are a few examples.


#include <aws/core/auth/AWSCredentialsProviderChain.h>
#include <aws/s3-encryption/S3EncryptionClient.h>
#include <aws/s3-encryption/CryptoConfiguration.h>
#include <aws/s3-encryption/materials/SimpleEncryptionMaterials.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <aws/s3/model/GetObjectRequest.h>
#include <cassert>
#include <fstream>
#include <iostream>

using namespace Aws::S3;
using namespace Aws::S3::Model;
using namespace Aws::S3Encryption;
using namespace Aws::S3Encryption::Materials;

static const char* const KEY = "s3_encryption_cpp_sample_key";
static const char* const BUCKET = "s3-encryption-cpp-bucket";
static const char* const FILE_NAME = "./localFile";

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);

    auto masterKey = Aws::Utils::Crypto::SymmetricCipher::GenerateKey();
    auto simpleMaterials = Aws::MakeShared<SimpleEncryptionMaterials>("s3Encryption", masterKey);

    CryptoConfiguration cryptoConfiguration(StorageMethod::METADATA, CryptoMode::AUTHENTICATED_ENCRYPTION);

    auto credentials = Aws::MakeShared<Aws::Auth::DefaultAWSCredentialsProviderChain>("s3Encryption");

    //construct S3 encryption client
    S3EncryptionClient encryptionClient(simpleMaterials, cryptoConfiguration, credentials);

    auto textFile = Aws::MakeShared<Aws::FStream>("s3Encryption", FILE_NAME, std::ios_base::in);
    assert(textFile->is_open());

    //put an encrypted object to S3
    PutObjectRequest putObjectRequest;
    putObjectRequest.WithBucket(BUCKET)
        .WithKey(KEY).SetBody(textFile);

    auto putObjectOutcome = encryptionClient.PutObject(putObjectRequest);
    
    if (putObjectOutcome.IsSuccess())
    {
        std::cout << "Put object succeeded" << std::endl;
    }
    else
    {
        std::cout << "Error while putting Object " << putObjectOutcome.GetError().GetExceptionName() <<
            " " << putObjectOutcome.GetError().GetMessage() << std::endl;
    }

    //get an encrypted object from S3
    GetObjectRequest getRequest;
    getRequest.WithBucket(BUCKET)
        .WithKey(KEY);

    auto getObjectOutcome = encryptionClient.GetObject(getRequest);
    if (getObjectOutcome.IsSuccess())
    {
        std::cout << "Successfully retrieved object from s3 with value: " << std::endl;
        std::cout << getObjectOutcome.GetResult().GetBody().rdbuf() << std::endl << std::endl;
    }
    else
    {
        std::cout << "Error while getting object " << getObjectOutcome.GetError().GetExceptionName() <<
            " " << getObjectOutcome.GetError().GetMessage() << std::endl;
    }

    Aws::ShutdownAPI(options);
}

In the previous example, we set up the Amazon S3 Encryption Client with simple encryption materials, a crypto configuration, and the default AWS credentials provider. The crypto configuration uses the metadata as the storage method and specifies Authenticated Encryption mode. This encrypts the S3 object with AES-GCM and uses the master key provided within the simple encryption materials to encrypt the content encryption key with AES Key Wrap. The client then PUTs a text file to S3, where it is encrypted and stored. When a GET operation is performed on the object, it is decrypted using the encryption information stored in the metadata, and the original text file is returned in the body of the S3 object.

Now, what if we wanted to store our encryption information in a separate instruction file object? And what if we wanted to use AWS KMS for our master key? Maybe we even want to increase the level of security by using Strict Authenticated Encryption instead? Well, that’s an easy switch, as you can see here.


#include <aws/core/auth/AWSCredentialsProviderChain.h>
#include <aws/s3-encryption/S3EncryptionClient.h>
#include <aws/s3-encryption/CryptoConfiguration.h>
#include <aws/s3-encryption/materials/KMSEncryptionMaterials.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <aws/s3/model/GetObjectRequest.h>
#include <aws/core/utils/memory/stl/AWSStringStream.h>
#include <iostream>

using namespace Aws::S3;
using namespace Aws::S3::Model;
using namespace Aws::S3Encryption;
using namespace Aws::S3Encryption::Materials;

static const char* const KEY = "s3_encryption_cpp_sample_key";
static const char* const BUCKET = "s3-encryption-cpp-sample-bucket";
static const char* const CUSTOMER_MASTER_KEY_ID = "arn:some_customer_master_key_id";

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);

    auto kmsMaterials = Aws::MakeShared<KMSEncryptionMaterials>("s3Encryption", CUSTOMER_MASTER_KEY_ID);

    CryptoConfiguration cryptoConfiguration(StorageMethod::INSTRUCTION_FILE, CryptoMode::STRICT_AUTHENTICATED_ENCRYPTION);

    auto credentials = Aws::MakeShared<Aws::Auth::DefaultAWSCredentialsProviderChain>("s3Encryption");

    //construct S3 encryption client
    S3EncryptionClient encryptionClient(kmsMaterials, cryptoConfiguration, credentials);

    auto requestStream = Aws::MakeShared<Aws::StringStream>("s3Encryption");
    *requestStream << "Hello from the S3 Encryption Client!";

    //put an encrypted object to S3
    PutObjectRequest putObjectRequest;
    putObjectRequest.WithBucket(BUCKET)
        .WithKey(KEY).SetBody(requestStream);

    auto putObjectOutcome = encryptionClient.PutObject(putObjectRequest);

    if (putObjectOutcome.IsSuccess())
    {
        std::cout << "Put object succeeded" << std::endl;
    }
    else
    {
        std::cout << "Error while putting Object " << putObjectOutcome.GetError().GetExceptionName() <<
            " " << putObjectOutcome.GetError().GetMessage() << std::endl;
    }

    //get an encrypted object from S3
    GetObjectRequest getRequest;
    getRequest.WithBucket(BUCKET)
        .WithKey(KEY);

    auto getObjectOutcome = encryptionClient.GetObject(getRequest);
    if (getObjectOutcome.IsSuccess())
    {
        std::cout << "Successfully retrieved object from s3 with value: " << std::endl;
        std::cout << getObjectOutcome.GetResult().GetBody().rdbuf() << std::endl << std::endl;
    }
    else
    {
        std::cout << "Error while getting object " << getObjectOutcome.GetError().GetExceptionName() <<
            " " << getObjectOutcome.GetError().GetMessage() << std::endl;
    }

    Aws::ShutdownAPI(options);
}

A few caveats:

  • We have not implemented Range-Get operations in Encryption Only mode. Although this is possible, Encryption Only is a legacy mode, and we encourage you to use Authenticated Encryption mode instead. However, if you need Range-Get operations in this legacy mode, please let us know and we will work on providing them.
  • Currently, Apple does not support AES-GCM with 256-bit keys. As a result, Authenticated Encryption and Strict Authenticated Encryption PUT operations do not work on the default Apple build configuration. You can, however, still download objects that were uploaded using Authenticated Encryption, because we can use AES-CTR mode for the download. Alternatively, you can build the SDK using OpenSSL for full functionality. As soon as Apple makes AES-GCM available in CommonCrypto, we will add more support.
  • We have not yet implemented Upload Part. We will be working on that next. In the meantime, if you use the TransferManager interface with the Amazon S3 Encryption Client, be sure to set the minimum part size to a size that is larger than your largest object.

The documentation for the Amazon S3 Encryption Client can be found here.

This package is now available on NuGet.

We hope you enjoy the Amazon S3 Encryption Client. Please leave your feedback on GitHub and feel free to submit pull requests or feature requests.

AWS SDK for C++ Now Available via NuGet

by Jonathan Henson | in C++

C++ has long suffered from the lack of good dependency management solutions. For .NET development, NuGet is one of the most commonly used tools. NuGet solves each of these problems for the Visual Studio C++ development environment:

  • Native Windows developers often have to build against multiple Visual C++ runtimes.
  • Companies often need to distribute flavors of their binaries for each architecture.
  • Building dependency libraries from scratch is time-consuming and difficult to scale across a large enterprise.
  • Globally installed native libraries suffer from versioning problems and make dynamic linking unsafe. Thus developers end up manually sandboxing their artifacts or statically linking.
  • Distributing dynamic libraries with an application is often error prone and unsafe.

Last week, we started pushing the AWS SDK for C++ to NuGet.org as part of our release process. Let’s look at an example:

First, let’s create a simple Visual C++ Win32 Console Application. For the main function, let’s just write out “Hello World!”

We want to integrate our application with Amazon Simple Storage Service (Amazon S3). To do this, right-click the project in the Solution Explorer, and select Manage NuGet Packages.

Next, we’ll select the package to use. In this case, we’ll search the keywords aws s3 native. Alternatively, we could search for “AWSSDKCPP-S3” because each of the AWS SDK for C++ packages follows the AWSSDKCPP-<service name> naming convention.

After we have installed the package, we can simply start using it. You’ll see that IntelliSense is already updated with the new include path.

Instead of worrying about setting up linker paths for each runtime and architecture configuration, our project will just build using the correct binaries.

Finally, you’ll see how our project page is already updated with our new dependencies.

We provide prebuilt Debug and Release binaries for Visual Studio 2013 and Visual Studio 2015 to target the 32-bit and 64-bit platforms.

We hope this makes native Windows development with AWS easier and more efficient. Please feel free to leave your feedback here or on GitHub.

Symmetric Encryption/Decryption in the AWS SDK for C++ with std::iostream

by Jonathan Henson | in C++

Cryptography is hard in any programming language. It is especially difficult in platform-portable native code, where we don’t have the advantage of a single platform implementation. Many customers have asked us for an Amazon S3 encryption client that is compatible with the Java and Ruby clients. Although we are not ready to release that yet, we have created a way to handle encryption and decryption across each of our supported platforms: the *nix variations, which rely mostly on OpenSSL; Apple, which provides CommonCrypto; and Windows, which exposes the BCrypt/CNG library. Each of these libraries provides a dramatically different interface, so we had to do a lot of work to force them into a common interface and usage pattern. As of today, you can use the AWS SDK for C++ to encrypt or decrypt your data with AES 256-bit ciphers in the CBC, CTR, and GCM modes. You can use the ciphers directly, or you can use the std::iostream interface. This new feature is valuable even without the high-level client we are hoping to provide soon, so we are excited to tell you about it.

Let’s look at a few use cases:

Suppose we want to write sensitive data to a file using a std::ofstream and have it encrypted on disk. However, when we read the file back, we want to parse and use the file in a std::ifstream as plaintext. Because we use AES-GCM in this example, after encryption, the tag and IV must be stored somewhere in order for the data to be decrypted. For this example, we will simply store them in memory so we can pass them to the decryptor.


#include <aws/core/Aws.h>
#include <aws/core/utils/crypto/CryptoStream.h>
#include <fstream>
#include <iostream>

using namespace Aws::Utils;
using namespace Aws::Utils::Crypto;

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);

    //create 256 bit symmetric key. This will use the entropy from your platform's crypto implementation
    auto symmetricKey = SymmetricCipher::GenerateKey();
    
    //create an AES-256 GCM cipher, iv will be autogenerated
    auto encryptionCipher = CreateAES_GCMImplementation(symmetricKey);

    const char* fileName = "./encryptedSensitiveData";

    //write the file to disk and encrypt it
    {
        //create the stream to receive the encrypted data.
        Aws::OFStream outputStream(fileName, std::ios_base::out | std::ios_base::binary);

        //now create an encryption stream.
        SymmetricCryptoStream encryptionStream(outputStream, CipherMode::Encrypt, *encryptionCipher);
        encryptionStream << "This is a file full of sensitive customer secrets:\n\n";
        encryptionStream << "CustomerName=John Smith\n";
        encryptionStream << "CustomerSSN=867-5309\n";
        encryptionStream << "CustomerDOB=1 January 1970\n\n";
    }

    //grab the IV that was used to initialize the cipher
    auto iv = encryptionCipher->GetIV();

    //since this is GCM, grab the tag once the stream is finished and the cipher is finalized
    auto tag = encryptionCipher->GetTag();

    //read the file back from disk and deal with it as plain-text
    {
        //create an AES-256 GCM cipher, passing the key, iv and tag that were used for encryption
        auto decryptionCipher = CreateAES_GCMImplementation(symmetricKey, iv, tag);

        //create source stream to decrypt from
        Aws::IFStream inputStream(fileName, std::ios_base::in | std::ios_base::binary);

        //create a decryption stream
        SymmetricCryptoStream decryptionStream(inputStream, CipherMode::Decrypt, *decryptionCipher);

        //write the file out to cout using normal stream operations
        Aws::String line;
        while(std::getline(decryptionStream, line))
        {
            std::cout << line << std::endl;
        }
    }

    Aws::ShutdownAPI(options);
    return 0;
}

What if, this time, we want to encrypt a plaintext file from local disk as we upload it to Amazon S3? We want the object to be stored encrypted in Amazon S3, but written back to disk as plaintext when we download it. This example code is not compatible with the existing Amazon S3 encryption clients; a compatible client is coming soon.


#include <aws/core/Aws.h>
#include <aws/core/utils/crypto/CryptoStream.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <aws/s3/model/GetObjectRequest.h>
#include <iostream>

using namespace Aws::Utils;
using namespace Aws::Utils::Crypto;
using namespace Aws::S3;

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);

    //create 256 bit symmetric key. This will use the entropy from your platform's crypto implementation
    auto symmetricKey = SymmetricCipher::GenerateKey();    

    const char* fileName = "./localFile";
    const char* s3Key = "encryptedSensitiveData";
    const char* s3Bucket = "s3-cpp-sample-bucket";

    //create an AES-256 CTR cipher
    auto encryptionCipher = CreateAES_CTRImplementation(symmetricKey);
        
    //create an S3 client
    S3Client client;
        
    //Put object into S3
    {
        //put object
        Model::PutObjectRequest putObjectRequest;
        putObjectRequest.WithBucket(s3Bucket)
                .WithKey(s3Key);

        //source stream for file we want to put encrypted into S3
        Aws::IFStream inputFile(fileName);

        //set the body to a crypto stream and we will encrypt in transit
        putObjectRequest.SetBody(
                Aws::MakeShared<SymmetricCryptoStream>("crypto-sample", inputFile, CipherMode::Encrypt, *encryptionCipher));

        auto putObjectOutcome = client.PutObject(putObjectRequest);

        if(!putObjectOutcome.IsSuccess())
        {
            std::cout << putObjectOutcome.GetError().GetMessage() << std::endl;
        }
     }
      
     auto iv = encryptionCipher->GetIV();

     //create cipher to use for decryption of AES-256 in CTR mode.
     auto decryptionCipher = CreateAES_CTRImplementation(symmetricKey, iv);     

     {
        //get the object back out of s3
        Model::GetObjectRequest getObjectRequest;
        getObjectRequest.WithBucket(s3Bucket)
                .WithKey(s3Key);

        //destination stream for the decrypted result to be written to.
        Aws::OFStream outputFile(fileName);

        //tell the client to create a crypto stream that knows how to decrypt the data as it comes across the wire.
        // write the decrypted output to outputFile.
        getObjectRequest.SetResponseStreamFactory(
                [&] { return Aws::New<SymmetricCryptoStream>("crypto-sample", outputFile, CipherMode::Decrypt, *decryptionCipher);}
        );

        auto getObjectOutcome = client.GetObject(getObjectRequest);

        if(!getObjectOutcome.IsSuccess())
        {
            std::cout << getObjectOutcome.GetError().GetMessage() << std::endl;
            return -1;
        }
        //the file should now be stored decrypted at fileName        
    }

    Aws::ShutdownAPI(options);

    return 0;
}

Unfortunately, this doesn’t solve the problem of securely transmitting or storing symmetric keys. Soon we will provide a fully automated encryption client that will handle everything, but for now, you can use AWS Key Management Service (AWS KMS) to encrypt and store your encryption keys.

A couple of things to keep in mind:

  • You do not need to use the Streams interface directly. The SymmetricCipher interface provides everything you need for cross-platform encryption and decryption operations. The streams are mostly for convenience and interoperating with the web request process.
  • When you use the stream in sink mode, you either need to explicitly call Finalize() or make sure the destructor of the stream has been called. Finalize() makes sure the cipher has been finalized and all hashes (in GCM mode) have been computed. If this method has not been called, the tag from the cipher may not be accurate when you try to pull it.
  • Because encrypted data is binary data, be sure to use the correct stream flags for binary data. Anything that depends on null terminated c-strings can create problems. Use Strings and String-Streams only when you know that the data is plaintext. For all other scenarios, use the non-formatted input and output operations on the stream.
  • AES-GCM with 256-bit keys is not implemented for Apple platforms. CommonCrypto does not expose this cipher mode in its API. As soon as Apple adds AES-GCM to its public interface, we will provide an implementation for that platform. If you need AES-GCM on Apple, you can use our OpenSSL implementation.
  • Try to avoid seek operations. When you seek backwards, we have to reset the cipher and re-encrypt everything up to your seek position. For S3 PutObject operations, we recommend that you set the Content-Length ahead of time instead of letting us compute it automatically. Be sure to turn off payload hashing.

We hope you enjoy this feature. We look forward to seeing the ways people use it. Please leave your feedback on GitHub, and as always, feel free to send us pull requests.

AWS SDK for C++: Simplified Configuration and Initialization

by Jonathan Henson | in C++

Many of our users are confused by initializing and installing a memory manager, enabling logging, overriding the HTTP stack, and installing custom cryptography implementations. Not only are these tasks confusing, they are tedious and require an API call to set up and tear down each component. To make matters worse, on some platforms, we were silently initializing libCurl and OpenSSL. This caused the AWS SDK for C++ to mutate static global state, creating problems for the programs that relied on it.

As of version 0.12.x, we have added a new initialization and shutdown process. We have added the structure Aws::SDKOptions to store each of the SDK-wide configuration options. You can use SDKOptions to set a memory manager, turn on logging, provide a custom logger, override the HTTP implementation, and install your own cryptography implementation at runtime. By default, this structure has all of the settings you are used to, but manipulating those options should now be more clear and accessible.

This change has a few side effects.

  • First, the HTTP factory is now globally installed instead of being passed to service-client constructors. It doesn’t really make sense to force the HTTP client factory through each client; if this setting is being customized, it will be used across the SDK.
  • Second, you can turn on logging simply by setting the logging level.
  • Third, if your application is already using OpenSSL or libCurl, you have the option of bypassing their initialization in the SDK entirely. This is particularly useful for legacy applications that need to interoperate with the AWS SDK.
  • Finally, all code using the SDK must call the Aws::InitAPI() function before making any other API calls and the Aws::ShutdownAPI() function when finished using the SDK. If you do not call these new functions, your application may crash for all SDK versions later than 0.12.x.

Here are a few recipes:


#include <aws/core/Aws.h>

Just use the default configuration:


   Aws::SDKOptions options;
   Aws::InitAPI(options);
   //make your SDK calls in here.
   Aws::ShutdownAPI(options);

Turn logging on using the default logger:


   Aws::SDKOptions options;
   options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Info;
   Aws::InitAPI(options);
   //make your SDK calls in here.
   Aws::ShutdownAPI(options);

Install a custom memory manager:


    MyMemoryManager memoryManager;

    Aws::SDKOptions options;
    options.memoryManagementOptions.memoryManager = &memoryManager;
    Aws::InitAPI(options);
    //make your SDK calls in here.
    Aws::ShutdownAPI(options);

Override the default HTTP client factory:


    Aws::SDKOptions options;
    options.httpOptions.httpClientFactory_create_fn = [](){ return Aws::MakeShared<MyCustomHttpClientFactory>("ALLOC_TAG", arg1); };
    Aws::InitAPI(options);
    //make your SDK calls in here
    Aws::ShutdownAPI(options);

Note: SDKOptions::HttpOptions takes a closure instead of a std::shared_ptr. We do this for all of the factory functions because the memory manager will not have been installed at the time you need to allocate this memory. You pass your closure to the SDK, and it is called when it is safe to do so. The simplest way to do this is with a lambda expression.

As we get ready for General Availability, we wanted to refine our initialization and shutdown scheme to be flexible for future feature iterations and new platforms. This update will allow us to provide new features without breaking users. We welcome your feedback about how we can improve this feature.

Using a Thread Pool with the AWS SDK for C++

by Jonathan Henson | in C++

The default thread executor implementation we provide for asynchronous operations spins up a thread and then detaches it. On modern operating systems, this is often exactly what we want. However, there are other use cases for which this simply will not work. For example, suppose we want to fire off asynchronous calls to Amazon Kinesis as quickly as we receive an event. Then suppose that we sometimes receive these events at a rate of 10 per millisecond. Even if we are calling Amazon Kinesis from an Amazon Elastic Compute Cloud (Amazon EC2) instance in the same data center as our Amazon Kinesis stream, the latency will eventually cause the number of threads on our system to balloon and possibly be exhausted.

Here is an example of what this code might look like:


#include <aws/kinesis/model/PutRecordsRequest.h>
#include <aws/kinesis/KinesisClient.h>
#include <aws/core/utils/Outcome.h>
#include <aws/core/utils/memory/AWSMemory.h>
#include <aws/core/Aws.h>
#include <iostream>

using namespace Aws::Client;
using namespace Aws::Utils;
using namespace Aws::Kinesis;
using namespace Aws::Kinesis::Model;

class KinesisProducer
{
public:
    KinesisProducer(const Aws::String& streamName, const Aws::String& partition) : m_partition(partition), m_streamName(streamName)
    {
        ClientConfiguration clientConfiguration;
        m_client = Aws::New<KinesisClient>("kinesis-sample", clientConfiguration);
    }

    ~KinesisProducer()
    {
        Aws::Delete(m_client);
    }

    void StreamData(const Aws::Vector<ByteBuffer>& data)
    {
        PutRecordsRequest putRecordsRequest;
        putRecordsRequest.SetStreamName(m_streamName);

        for(auto& datum : data)
        {
            PutRecordsRequestEntry putRecordsRequestEntry;
            putRecordsRequestEntry.WithData(datum)
                    .WithPartitionKey(m_partition);

            putRecordsRequest.AddRecords(putRecordsRequestEntry);
        }

        m_client->PutRecordsAsync(putRecordsRequest,
               std::bind(&KinesisProducer::OnPutRecordsAsyncOutcomeReceived, this, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3, std::placeholders::_4));
    }

private:
    void OnPutRecordsAsyncOutcomeReceived(const KinesisClient*, const Model::PutRecordsRequest&,
                                          const Model::PutRecordsOutcome& outcome, const std::shared_ptr<const Aws::Client::AsyncCallerContext>&)
    {
        if(outcome.IsSuccess())
        {
            std::cout << "Records Put Successfully " << std::endl;
        }
        else
        {
            std::cout << "Put Records Failed with error " << outcome.GetError().GetMessage() << std::endl;
        }
    }

    KinesisClient* m_client;
    Aws::String m_partition;
    Aws::String m_streamName;
};


int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    KinesisProducer producer("kinesis-sample", "announcements");

    while(true)
    {
        Aws::String event1("Event #1");
        Aws::String event2("Event #2");

        producer.StreamData( {
                                     ByteBuffer((unsigned char*)event1.c_str(), event1.length()),
                                     ByteBuffer((unsigned char*)event2.c_str(), event2.length())
                             });
    }

    Aws::ShutdownAPI(options);
    return 0;
}


This example is intended to show how exhausting the available threads from the operating system will ultimately result in a program crash. Most systems with this problem would be bursty and would not create such a sustained load. Still, we need a better way to handle our threads for such a scenario.

This week, we released a thread pool executor implementation. Simply include the aws/core/utils/threading/Executor.h file. The class name is PooledThreadExecutor. You can set two options: the number of threads for the pool to use and the overflow policy.

Currently, there are two overflow policy modes:

QUEUE_TASKS_EVENLY_ACROSS_THREADS will allow you to push as many tasks as you want to the executor. It will make sure tasks are queued and pulled by each thread as quickly as possible. For most cases, QUEUE_TASKS_EVENLY_ACROSS_THREADS is the preferred option.

REJECT_IMMEDIATELY will reject the task submission if the queued task length ever exceeds the size of the thread pool.

Let’s revise our example to use a thread pool:


#include <aws/kinesis/model/PutRecordsRequest.h>
#include <aws/kinesis/KinesisClient.h>
#include <aws/core/utils/Outcome.h>
#include <aws/core/utils/memory/AWSMemory.h>
#include <aws/core/utils/threading/Executor.h>

using namespace Aws::Client;
using namespace Aws::Kinesis;
using namespace Aws::Kinesis::Model;

class KinesisProducer
{
public:
    KinesisProducer(const Aws::String& streamName, const Aws::String& partition) : m_partition(partition), m_streamName(streamName)
    {
        ClientConfiguration clientConfiguration;
        clientConfiguration.executor = Aws::MakeShared<Aws::Utils::Threading::PooledThreadExecutor>("kinesis-sample", 10);
        m_client = Aws::New<KinesisClient>("kinesis-sample", clientConfiguration);
    }

    ....

The only change we need to make to add the thread pool to our configuration is to assign an instance of the new executor implementation to our ClientConfiguration object.

As always, we welcome your feedback, and even pull requests, about how we can improve this feature.

Using CMake Exports with the AWS SDK for C++

by Jonathan Henson | in C++

This is our very first C++ blog post for the AWS Developer blog. There will be more to come. We are excited to receive and share feedback with the C++ community. This first post will start where most projects start, with the building of a simple program.

Building an application in C++ can be a daunting task—especially when dependencies are involved. Even after you have figured out what you want to do and which libraries you need to use, you encounter seemingly endless, painful tasks to compile, link, and distribute your application.

AWS SDK for C++ users most frequently report the difficulty of compiling and linking against the SDK. This involves building the SDK, installing the header files and libraries somewhere, updating the build system of the application with the include and linker paths, and passing definitions to the compiler. This is an error-prone, and now unnecessary, process. CMake has built-in functionality that will handle this scenario. We have now updated the CMake build scripts to handle this complexity for you.

The example we will use in this post assumes you are familiar with Amazon Simple Storage Service (Amazon S3) and know how to download and build the SDK. For more information, see our readme on GitHub. Suppose we want to write a simple program to upload and retrieve objects from Amazon S3. The code would look something like this:


#include <aws/s3/S3Client.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <aws/s3/model/GetObjectRequest.h>
#include <aws/core/Aws.h>
#include <aws/core/utils/memory/stl/AWSStringStream.h>
#include <iostream>

using namespace Aws::S3;
using namespace Aws::S3::Model;

static const char* KEY = "s3_cpp_sample_key";
static const char* BUCKET = "s3-cpp-sample-bucket";

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    S3Client client;
    
    //first put an object into s3
    PutObjectRequest putObjectRequest;
    putObjectRequest.WithKey(KEY)
           .WithBucket(BUCKET);

    //this can be any arbitrary stream (e.g. fstream, stringstream etc...)
    auto requestStream = Aws::MakeShared<Aws::StringStream>("s3-sample");
    *requestStream << "Hello World!";
    
    //set the stream that will be put to s3
    putObjectRequest.SetBody(requestStream);

    auto putObjectOutcome = client.PutObject(putObjectRequest);

    if(putObjectOutcome.IsSuccess())
    {
        std::cout << "Put object succeeded" << std::endl;
    }
    else
    {
        std::cout << "Error while putting Object " << putObjectOutcome.GetError().GetExceptionName() << 
               " " << putObjectOutcome.GetError().GetMessage() << std::endl;
    }

    //now get the object back out of s3. The response stream can be overridden here if you want it to go directly to 
    // a file. In this case the default string buf is exactly what we want.
    GetObjectRequest getObjectRequest;
    getObjectRequest.WithBucket(BUCKET)
        .WithKey(KEY);

    auto getObjectOutcome = client.GetObject(getObjectRequest);

    if(getObjectOutcome.IsSuccess())
    {
        std::cout << "Successfully retrieved object from s3 with value: " << std::endl;
        std::cout << getObjectOutcome.GetResult().GetBody().rdbuf() << std::endl << std::endl;
    }
    else
    {
        std::cout << "Error while getting object " << getObjectOutcome.GetError().GetExceptionName() <<
             " " << getObjectOutcome.GetError().GetMessage() << std::endl;
    }

    Aws::ShutdownAPI(options);
    return 0;  
}

Here, we have a direct dependency on aws-cpp-sdk-s3 and an indirect dependency on aws-cpp-sdk-core. Furthermore, several platform-specific dependencies are required to make this work: on Windows, WinHttp and BCrypt; on Linux, curl and OpenSSL; and on OS X, curl and CommonCrypto. Other platforms, such as mobile, have their own dependencies. Traditionally, you would need to update your build system to detect each of these platforms and inject the right properties for each target.

However, the build process for the SDK already has access to this information from its configuration step, so why should you have to worry about this mess? Enter CMake's export() functionality. What would the CMakeLists.txt to build this program look like? This file generates our build artifacts for each platform we need to support: Visual Studio, Xcode, AutoMake, and so on.


cmake_minimum_required(VERSION 2.8)
project(s3-sample)

#this will locate the aws sdk for c++ package so that we can use its targets
find_package(aws-sdk-cpp)

add_executable(s3-sample main.cpp)

#since we called find_package(), this will resolve all dependencies, header files, and cflags necessary
#to build and link your executable. 
target_link_libraries(s3-sample aws-cpp-sdk-s3)

That’s all you need to build your program. When we run this script for Visual Studio, CMake determines that aws-cpp-sdk-s3 depends on aws-cpp-sdk-core, WinHttp, and BCrypt. The CMake configuration for the aws-sdk-cpp package also knows whether the SDK was built using custom memory management, and will make sure the -DAWS_CUSTOM_MEMORY_MANAGEMENT flag is passed to your compiler if needed. The resulting Visual Studio projects will already have the include and linker arguments set and will contain the compile definitions that need to be passed to your compiler. On GCC and Clang, we also pass the -std=c++11 flag for you.

To configure your project, simply run the following:


cmake -Daws-sdk-cpp_DIR=<path to your SDK build> <path to your source>

You can pass additional CMake arguments, such as -G "Visual Studio 12 2013 Win64", as well.

Now you are ready to build with msbuild, make, or whatever other build system you are using.
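Putting it together, a typical out-of-source workflow might look like this (the paths are placeholders for wherever you built the SDK and keep your source):

```shell
mkdir build && cd build
cmake -Daws-sdk-cpp_DIR=/path/to/sdk/build /path/to/s3-sample
cmake --build .   # drives msbuild, make, or whichever generator you chose
```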

Obviously, not everyone uses or even wants to use CMake. The aws-sdk-cpp-config.cmake file contains all of the information required to update your build script to use the SDK.
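If you maintain a hand-written Makefile, for example, the information in that file boils down to flags along these lines. This is only a sketch: the paths are hypothetical, and the exact system libraries vary by platform, as described earlier.

```shell
# Linux example; adjust the include/lib paths to wherever you installed the SDK.
# Add -DAWS_CUSTOM_MEMORY_MANAGEMENT if the SDK was built with custom memory management.
g++ -std=c++11 \
    -I/path/to/sdk/install/include \
    main.cpp -o s3-sample \
    -L/path/to/sdk/install/lib \
    -laws-cpp-sdk-s3 -laws-cpp-sdk-core \
    -lcurl -lcrypto -lpthread
```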

We’d like to extend a special thanks to our GitHub users for requesting this feature, especially Rico Huijbers, who shared a blog post on the topic. His original post can be found here.

We are excited to be offering better support to the C++ community. We invite you to try this feature and leave feedback here or on GitHub.