Using Adobe Flex and AIR to Store Data in Amazon S3

Dan Orlando walks through a sample photo gallery application using Adobe Flex to demonstrate how to store data in Amazon S3.

Details

Submitted By: Craig@AWS
AWS Products Used: Amazon S3
Language(s): ActionScript
Created On: April 29, 2009 3:13 PM GMT
Last Updated: April 29, 2009 3:30 PM GMT

By Dan Orlando

Most Adobe Flex and AIR applications require one or more server-side business tiers that carry out services for messaging, data manipulation, and file management. However, by the time you are finished reading this article, you will know how to build useful desktop applications with Flex and AIR that employ the Amazon Simple Storage Service (Amazon S3) for storing data in place of a multi-tiered server infrastructure.

The Application

A significant amount of time and effort went into this sample application, because I wanted to provide an example that combines the powerful capabilities of Flex and AIR with Amazon S3 to demonstrate something truly unique and useful. In short, the application you will be building is a utility for managing images in an Amazon S3 "bucket" that you own. However, you could easily extend the example to emulate a file-management application that integrates seamlessly with the local file system—similar to mounting a remote drive on your local system by tunneling through a virtual private network (VPN).

You can download the application source code from http://awscode.s3.amazonaws.com/S3ImageViewer.zip.

When the application is first initialized, a window similar to Figure 1 appears. Upon typing your Amazon access ID, private key, and bucket name, the application springs to life, as seen in Figure 2.


Figure 1. The application must have your Amazon S3 account information before it can do anything.


Figure 2. The application comes to life after you have entered the information.

The image thumbnails displayed in the horizontal list at the bottom of the application represent the images in my bucket, which I named galleryassets. After you have set up your Amazon S3 account, I recommend installing the Amazon S3 Firefox plug-in before doing anything else. The easiest way to create your own bucket is to click the little folder icon in the Amazon S3 organizer window after you have it installed. Because bucket names share a single global namespace across the Amazon S3 system, you will need to name your bucket something other than galleryassets, because that name belongs to me now. After you've created your bucket, just leave it empty. You're going to fill it up in just a minute.

Tools and Code Libraries

If you haven't already done so, download the code for this tutorial now. In Adobe Flex Builder, navigate to File > Import > Flex Project. You should be able to simply extract the download and select the S3ImageViewer directory as the project folder. I've left all of the project properties files as is for easy importing, and I've also included the libraries that AIR needs to work with Amazon S3: CoreLib, Crypto, and AWSs3Lib.

Note: AWSs3Lib is currently a work in progress, and I have added a few classes to the com.adobe.webapis.awss3 package so that it would support HTTP POST functionality.

Fill Your Bucket

Before you dive into the code, launch the S3ImageViewer project from Flex so that you can make sure it's all working. After you enter your access ID, secret key, and bucket name, the authentication window disappears, but you still won't have anything in your thumbnail browser. Assuming that you have some JPEG images somewhere on your computer, open a file browser and drag your JPEG images onto the area of the application where the thumbnails should appear. You should immediately begin seeing thumbnails of your images. You don't have to drag them over one at a time, either: try grabbing a whole group of images and dragging them over.

Now, I'll show you how the code made that possible.

Application File Structure

As you see in Figure 3, the danorlando package (open in Figure 3) contains the application-specific code, while the adobe and hurlant directories (closed in Figure 3) contain the CoreLib and Crypto libraries, respectively. The danorlando package contains five directories: controller, events, model, util, and view. The structure of the application code implements the Model-View-Controller (MVC) architectural design pattern. What I like about MVC is that it enforces a logical separation of code, yet remains flexible enough to accommodate whatever additional structural, behavioral, and creational design patterns the complexity of your application demands.


Figure 3. The danorlando package implements a flexible MVC design pattern.

As you peruse the code, you will find a number of additional design patterns in use. However, the code has been abstracted and organized in such a way that this article can focus primarily on one class: the UIController class, found in the com.danorlando.controller package.

Application Flow

The application is not fully initialized until you enter the authentication parameters into the authentication window and click Submit. The simple flow diagram in Figure 4 shows the initialization life cycle.


Figure 4. The application initialization life cycle

As you see in Figure 4, the process starts when the UIController.authenticate method is called. This method instantiates the AWSS3 class and passes it the Amazon S3 accessID and secretKey values for temporary storage so that the user does not have to re-enter these parameters every time the application wants to make a new request.

The code for the UIController.authenticate method is as follows:

/**
 * Called when the submit button is selected from the authentication window
 * that is displayed on application initialization. This creates the initial
 * connection to AWS S3.
 *
 * @param accessID The S3 Access ID provided with your S3 account
 * @param secretKey The secret private key assigned to your S3 account
 * @param bucketName The name of the bucket that the application should connect to
 *
 */
public function authenticate(accessID:String, secretKey:String, bucketName:String):void {
    this.bucket = bucketName;
    this.accessId = accessID;
    this.secretKey = secretKey;
    // Instantiate the AWS S3 API.
    _awsAPI = new AWSS3(accessID, secretKey);
    // Add listeners for the events that we are interested in.
    _awsAPI.addEventListener(AWSS3Event.LIST_OBJECTS, listObjectsHandler);
    _awsAPI.addEventListener(AWSS3Event.ERROR, awsErrorHandler);
    _awsAPI.addEventListener(AWSS3Event.OBJECT_RETRIEVED, objectRetrievedHandler);
    _awsAPI.listObjects(this._bucket);
    // Create a temp directory for making temporary copies of images.
    _tempDir = File.createTempDirectory();
}

Note: The authentication parameters could also have been saved in an encrypted local store and pulled into memory automatically the next time the application was started, thus avoiding the need to show the authentication window again. However, this setup would also require additional functionality to allow for switching Amazon S3 accounts and buckets, as doing so would no longer be possible simply by quitting the application and starting it again.

After the AWSS3 class is instantiated, event listeners are added to it so that the UIController is informed whenever an AWSS3Event.LIST_OBJECTS, AWSS3Event.ERROR, or AWSS3Event.OBJECT_RETRIEVED event is fired and can update the user interface (UI) as necessary. After the event listeners are added, the call to the AWSS3.listObjects method is made, with the bucket name entered in the authentication window passed in as an argument. At this point, a lot happens behind the scenes: A URLStream is opened, a GET request is made, and Amazon S3 responds with an Extensible Markup Language (XML) data stream that contains information for each file in the bucket. The XML is then parsed into a collection of objects of type S3Object, which encapsulates the metadata for each file. After the XML stream has been parsed, the AWSS3Event.LIST_OBJECTS event is fired, and the array of S3Object objects is passed through the generic data property of the AWSS3Event, which is typed simply as Object.
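The shape of that XML response is easy to see outside of Flex. As a sketch, here is how such a listing could be parsed in Python; the element names follow Amazon's ListBucketResult schema, but the sample document and parsing code are illustrative, not taken from the AWSs3Lib source:

```python
import xml.etree.ElementTree as ET

# A trimmed example of the XML stream S3 returns for a bucket listing.
LISTING = """<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>galleryassets</Name>
  <Contents>
    <Key>sunset.jpg</Key>
    <LastModified>2009-04-01T12:00:00.000Z</LastModified>
    <Size>48213</Size>
  </Contents>
  <Contents>
    <Key>vacation/beach.jpg</Key>
    <LastModified>2009-04-02T08:30:00.000Z</LastModified>
    <Size>91554</Size>
  </Contents>
</ListBucketResult>"""

NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def parse_listing(xml_text):
    """Parse a ListBucketResult into dicts that mirror the S3Object
    properties populated by listObjects: key, lastModified, and size."""
    root = ET.fromstring(xml_text)
    objects = []
    for contents in root.findall(NS + "Contents"):
        objects.append({
            "key": contents.find(NS + "Key").text,
            "lastModified": contents.find(NS + "LastModified").text,
            "size": int(contents.find(NS + "Size").text),
        })
    return objects

print([o["key"] for o in parse_listing(LISTING)])
# ['sunset.jpg', 'vacation/beach.jpg']
```

Each dict here plays the role of one S3Object in the ActionScript code: a lightweight record of what is in the bucket, with the actual bytes fetched later, one getObject call at a time.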

In the authenticate method, I added event listeners to the AWSS3 object—one of which was for the AWSS3Event.LIST_OBJECTS event—and assigned listObjectsHandler as the handler function for the event. This method captures the array of file objects from the AWSS3Event.data property. It then iterates through the array, making the call to AWSS3.getObject for each file. The getObject method takes only two parameters: the bucket name and a key. Don't let the terminology confuse you here: in this context, the term key is synonymous with what you generally refer to as a file path (for example, danorlando/documents/music/mysong.mp3).

Note: It is important to understand the differences in the way Amazon S3 refers to the elements contained in a bucket, and not to confuse these elements with the elements of a typical file system. Even though they use similar metaphorical icons (like folder and document icons), a bucket is actually a namespace, and an object can be either a file or a virtual directory. An object's key is synonymous with its concatenated path and file name. However, if it is a directory object, it has no file name, and its key would simply be referenced with no file name, as in path/to/folder/.
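The distinction described in the note above comes down to a one-line check: a key that ends in a slash denotes a virtual directory object, and anything else is a file. A small illustrative helper (not part of the library):

```python
def describe_key(key):
    """Classify an S3 key: a trailing slash marks a virtual directory
    object, while any other key refers to a file."""
    if key.endswith("/"):
        return "directory"
    return "file"

print(describe_key("danorlando/documents/music/mysong.mp3"))  # file
print(describe_key("path/to/folder/"))                        # directory
```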

The code for the listObjectsHandler method is as follows:

/**
 * Grabs the array of file objects sent from S3 from the <code>event.data</code>
 * parameter and iterates through the array. For each filename, or <code>key</code> in
 * the array, a call is made to <code>AWSS3.getObject(bucket,key)</code>. Each time an
 * object is successfully retrieved, the <code>AWSS3Event.OBJECT_RETRIEVED</code> event
 * is fired, which then calls the <code>objectRetrievedHandler</code> function.
 *
 * @param event AWSS3Event
 *
 * @see com.adobe.webapis.awss3.AWSS3.getObject
 * @see com.danorlando.controller.UIController.objectRetrievedHandler
 *
 */
public function listObjectsHandler(event:AWSS3Event):void {
    var objects:Array = event.data as Array;
    for (var i:int = 0; i < objects.length; i++) {
        var filename:String = objects[i].key as String;
        _awsAPI.getObject(this._bucket, filename);
    }
}

The listObjectsHandler method initializes the loop symbolized in Figure 4 by the arrows that appear on both sides of the last three boxes. In the listObjects method, each S3Object created only gets values for S3Object.key, S3Object.lastModified, and S3Object.size, because the purpose of the listObjects method is only to return an XML stream that symbolizes the file structure of the specified bucket. In contrast, the getObject method provides values for the missing properties, including:

  • S3Object.bucket: The name of the bucket in which the object resides
  • S3Object.bytes: The file's ByteArray, obtained by calling the getDataFromStream method
  • S3Object.type: The file's MIME type (for example, image/jpeg)

Each time the getObject method is called, the URLStream instance loads the data for the respective file object, then fires the AWSS3Event.OBJECT_RETRIEVED event, with the respective S3Object assigned to the AWSS3Event.data property so that the event handler can pick it up.
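Each of those GET requests is authenticated through a signed Authorization header, which the AWSS3 class builds for you. The scheme underneath is the classic S3 REST signature: assemble a canonical string for the request and compute an HMAC-SHA1 over it with your secret key. A minimal sketch, with made-up credentials and an empty Content-MD5 and Content-Type as a simple GET would have:

```python
import base64
import hashlib
import hmac

def sign_get_request(secret_key, bucket, key, date_string):
    """Build the signature portion of the Authorization header for a GET,
    following the classic S3 REST scheme: HTTP verb, Content-MD5 (empty),
    Content-Type (empty), date, and the canonicalized resource, joined by
    newlines, then HMAC-SHA1 signed and Base64 encoded."""
    string_to_sign = "GET\n\n\n{}\n/{}/{}".format(date_string, bucket, key)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical credentials, for illustration only.
sig = sign_get_request("hypothetical-secret-key", "galleryassets",
                       "sunset.jpg", "Tue, 27 Mar 2007 19:36:42 +0000")
print("Authorization: AWS HYPOTHETICALACCESSID:" + sig)
```

The header value is what the article means by the encrypted signature being "placed in the request's header": access ID in the clear, signature derived from the secret key without ever sending the key itself.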

Uploading Files to Amazon S3 Using POST

Making GET requests to Amazon S3 is straightforward compared to POST. The HTTP POST request requires some additional work because of an extra layer of security. For the other HTTP requests, authentication is handled for you by the AWSS3 class, which encrypts the signature and places it in the request's header as the value for the authorization parameter. If you want to make POST requests to Amazon S3, not only do you need to send along the access ID and encrypted signature, but you also need to generate an encrypted "policy document" and send it along with the request. One thing is for sure: The AWS developers did a pretty good job of making sure that users could not upload files to a bucket that doesn't belong to them. Lucky for you, com.danorlando.util.PolicyFactory is a reusable class that implements the Factory Method design pattern and takes care of the heavy lifting of generating this policy document. Needless to say, figuring out this piece of the application, then writing the code to generate the policy exactly how Amazon S3 expects to see it was a bit daunting, so this article will probably save you a considerable amount of time.

The specific details of how the PolicyFactory generates the policy and signature for an upload are quite lengthy, so I won't get into them here. All you have to know is that there are two important methods in the class:

  • generatePolicy: Generates the Base64 policy document and returns it as a string
  • signPolicy: Combines the policy document and the Amazon S3 secret key to create an HMAC-SHA1 signature and returns it as a string
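The general scheme these two methods follow matches Amazon's browser-based POST uploads: the policy is a JSON document of upload conditions, Base64 encoded, and the signature is an HMAC-SHA1 of that encoded policy. The sketch below illustrates the concept in Python; it is not the PolicyFactory source, and the condition fields and expiration shown are common placeholder values, not necessarily what PolicyFactory emits:

```python
import base64
import hashlib
import hmac
import json

def generate_policy(bucket, key, acl="authenticated-read",
                    expiration="2009-12-31T12:00:00.000Z"):
    """Build a Base64-encoded policy document listing the conditions
    the POST upload must satisfy (bucket, key, and acl here)."""
    policy = {
        "expiration": expiration,
        "conditions": [
            {"bucket": bucket},
            {"key": key},
            {"acl": acl},
        ],
    }
    return base64.b64encode(json.dumps(policy).encode()).decode()

def sign_policy(policy_b64, secret_key):
    """HMAC-SHA1 the Base64 policy with the secret key and Base64 the digest,
    mirroring the two-argument signature of PolicyFactory.signPolicy."""
    digest = hmac.new(secret_key.encode(), policy_b64.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

policy = generate_policy("galleryassets", "sunset.jpg")
signature = sign_policy(policy, "hypothetical-secret-key")
```

Because S3 recomputes the signature server-side with the secret key it has on file, a tampered or forged policy simply fails verification, which is why nobody can POST into a bucket they don't own.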

Rather than understanding the inner workings of the PolicyFactory, it is far more important to understand how it is implemented in the application. The PolicyFactory is put to use in the UIController.postFilesToBucket method, which is called when one or more files are dragged from the local file browser or desktop to the application. The code for the postFilesToBucket method is as follows:

/**
 * Instantiates the PolicyFactory class, and iterates the <code>ArrayCollection</code> of files that
 * were dropped onto the browser. It uploads the files by first instantiating a FileReference object
 * for each File in the array collection, then it creates an <code>S3PostOptions</code> object to hold
 * the necessary parameters for S3 to accept the upload. The respective policy and signature are generated
 * by the <code>PolicyFactory</code> class, and finally the call to <code>S3PostRequest.upload</code> is made
 * with the FileReference passed in as a parameter.
 *
 * @param files ArrayCollection
 *
 */
public function postFilesToBucket(files:ArrayCollection):void {
    var policyFactory:PolicyFactory = new PolicyFactory();
    for each (var f:Object in files) {
        var file:FileReference = f as FileReference;
        var key:String = file.name;
        var date:Date = new Date();
        var dateString:String = _awsAPI.getDateString(date);
        var options:S3PostOptions = new S3PostOptions();
        options.acl = "authenticated-read";
        options.contentType = "image/jpeg";
        options.policy = policyFactory.generatePolicy(_bucket, key, _accessId, secretKey);
        options.signature = policyFactory.signPolicy(options.policy, this._secretKey);
        var post:S3PostRequest = new S3PostRequest(this._accessId, this._bucket, key, options);
        post.upload(file);
    }
}

As explained in the code comments, the method instantiates the PolicyFactory and iterates the files that were dropped onto the application, creating a FileReference for each file. It then instantiates a new S3PostOptions object to store the parameters for the request. These parameters include acl, a string that identifies the user permissions to assign to the uploaded file, as well as the file's MIME type, which is assigned to the S3PostOptions.contentType property. The next part is the most important: the generatePolicy and signPolicy methods of the PolicyFactory class are called, and the String values they return are assigned to the S3PostOptions.policy and S3PostOptions.signature properties, respectively. Finally, a new S3PostRequest object is instantiated, and the S3PostRequest.upload method is called.
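What S3PostRequest ultimately sends is a multipart/form-data POST whose form fields mirror Amazon's browser-based upload form. As a sketch of that field set (the field names are Amazon's documented POST form fields; the helper function and its sample values are illustrative), with the file bytes required to come last in the multipart body:

```python
def build_post_fields(access_id, key, acl, content_type, policy, signature):
    """Assemble the form fields an S3 browser-style POST upload carries.
    The binary "file" part must follow all of these fields in the body."""
    return {
        "key": key,
        "AWSAccessKeyId": access_id,
        "acl": acl,
        "Content-Type": content_type,
        "policy": policy,
        "signature": signature,
        # The "file" part (the image bytes) is appended after these fields.
    }

fields = build_post_fields("HYPOTHETICALACCESSID", "sunset.jpg",
                           "authenticated-read", "image/jpeg",
                           "BASE64-POLICY", "BASE64-SIGNATURE")
print(sorted(fields))
```

Every value in the S3PostOptions object maps directly onto one of these fields, which is why the ActionScript method spends most of its time filling in options before the single upload call.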

Conclusion

Overall, integrating Flex and AIR with Amazon S3 involves patience and determination. However, it is undoubtedly worth it, and this article should provide an excellent starting point.

About the Author

Dan Orlando is a published author on rich application development in the enterprise and has been featured in such magazines and web sites as PHP Architect, IBM developerWorks, and the Adobe Developer Connection.

©2014, Amazon Web Services, Inc. or its affiliates. All rights reserved.