The AWS Free Tier includes 10,000 activity tasks, 30,000 workflow-days, and 1,000 initiated executions with Amazon Simple Workflow Service (Amazon SWF).

Amazon Simple Workflow Service (Amazon SWF) is a task coordination and state management service for cloud applications. With Amazon SWF, you can stop writing complex glue code and state machinery and invest more in the business logic that makes your applications unique.

Our APIs, ease-of-use libraries, and control engine give developers the tools to coordinate, audit, and scale applications across multiple machines, both in the AWS Cloud and in other data centers. Whether you are automating business processes for finance applications, building big-data systems, or managing cloud infrastructure services, Amazon SWF helps you develop applications with processing steps that are resilient to failure, that can be scaled independently of one another, and that can be audited even when they touch many different systems.

Using Amazon SWF, you structure the various processing steps of an application that runs across one or more machines as a set of “tasks.” Amazon SWF manages dependencies between the tasks, schedules the tasks for execution, and coordinates any logic that needs to run in parallel. The service also stores the tasks, reliably dispatches them to application components, tracks their progress, and keeps their latest state.
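
To make the task model concrete, here is a minimal sketch of an SWF activity worker written in Python with boto3. This is an illustration, not official sample code: the domain name, task list name, and do_work function are hypothetical, and the domain is assumed to already be registered.

```python
# Minimal sketch of an Amazon SWF activity worker (hypothetical names).
import boto3

swf = boto3.client("swf", region_name="us-east-1")

DOMAIN = "ExampleDomain"                  # assumed to be registered already
TASK_LIST = {"name": "example-task-list"}

def do_work(task_input: str) -> str:
    """Placeholder for the real business logic of this worker."""
    return task_input.upper()

while True:
    # Long-poll SWF (up to 60 seconds) for the next task on this task list.
    task = swf.poll_for_activity_task(
        domain=DOMAIN, taskList=TASK_LIST, identity="worker-1"
    )
    if not task.get("taskToken"):
        continue  # the poll timed out without a task; poll again
    try:
        result = do_work(task.get("input", ""))
        swf.respond_activity_task_completed(
            taskToken=task["taskToken"], result=result
        )
    except Exception as exc:
        # Report the failure so the coordination logic can retry or compensate.
        swf.respond_activity_task_failed(
            taskToken=task["taskToken"], reason=str(exc)[:256]
        )
```

Because SWF keeps each task's state on the service side, a worker that crashes mid-task simply lets the task time out, after which it can be rescheduled to another worker.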

As your business requirements change, Amazon SWF makes it easy to change application logic without worrying about the underlying state machinery, task dispatch, and flow control. And as with other AWS services, you pay only for what you use.

Amazon SWF replaces the complexity of custom-coded workflow solutions and process automation software with a fully managed web service. This eliminates the need for developers to manage the infrastructure plumbing of process automation so they can focus their energy on the unique functionality of their application.

Amazon SWF seamlessly scales with your application’s usage. No manual administration of the workflow service is required as you add more workflows to your application or increase the complexity of your workflows.

Amazon SWF lets you write your application components and coordination logic in any programming language and run them in the cloud or on-premises.
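
Both workers and coordination logic talk to SWF by polling over HTTPS, which is why they can be written in any language and run in the cloud or on-premises. As a sketch of the coordination side, here is a "decider" for a trivial one-step workflow in Python with boto3; the names are hypothetical, and ExampleActivity is assumed to be registered with default timeouts.

```python
# Minimal sketch of an Amazon SWF decider for a one-step workflow.
import boto3

swf = boto3.client("swf", region_name="us-east-1")

DOMAIN = "ExampleDomain"
DECIDER_TASK_LIST = {"name": "example-decider-list"}

while True:
    task = swf.poll_for_decision_task(
        domain=DOMAIN, taskList=DECIDER_TASK_LIST, identity="decider-1"
    )
    if not task.get("taskToken"):
        continue  # poll timed out; poll again

    event_types = [e["eventType"] for e in task["events"]]
    if "ActivityTaskCompleted" in event_types:
        # The only activity has finished, so close the workflow execution.
        decisions = [{
            "decisionType": "CompleteWorkflowExecution",
            "completeWorkflowExecutionDecisionAttributes": {"result": "done"},
        }]
    else:
        # The workflow just started: schedule the single activity task.
        decisions = [{
            "decisionType": "ScheduleActivityTask",
            "scheduleActivityTaskDecisionAttributes": {
                "activityType": {"name": "ExampleActivity", "version": "1.0"},
                "activityId": "example-activity-1",
                "taskList": {"name": "example-task-list"},
                "input": "hello",
            },
        }]

    swf.respond_decision_task_completed(
        taskToken=task["taskToken"], decisions=decisions
    )
```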

Video encoding using Amazon S3 and Amazon EC2. In this use case, large videos are uploaded to Amazon S3 in chunks, and the upload of each chunk must be monitored. After a chunk is uploaded, it is downloaded to an Amazon EC2 instance and encoded there, and the encoded chunk is stored to another Amazon S3 location. After all of the chunks have been encoded in this manner, they are combined into a complete encoded file, which is stored in its entirety back to Amazon S3. Failures can occur during this process when one or more chunks encounter encoding errors; such failures must be detected and handled.
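
One way such a pipeline could be kicked off is sketched below with boto3; the domain, workflow type, task list, and chunk keys are hypothetical, and the EncodeVideo type is assumed to be registered with a default child policy. The list of uploaded chunks is passed as workflow input, and the decider would then schedule one encode activity per chunk in parallel, retrying any chunk whose activity reports a failure.

```python
# Hypothetical sketch: start one workflow execution per uploaded video.
import json
import boto3

swf = boto3.client("swf", region_name="us-east-1")

# Chunk keys produced by the multipart upload step (illustrative values).
chunks = ["videos/raw/movie.part-000", "videos/raw/movie.part-001"]

swf.start_workflow_execution(
    domain="VideoDomain",                     # assumed to be registered
    workflowId="encode-movie-0001",           # unique ID for this execution
    workflowType={"name": "EncodeVideo", "version": "1.0"},
    taskList={"name": "video-decider-list"},  # where the decider polls
    input=json.dumps({"chunks": chunks}),
    executionStartToCloseTimeout="3600",      # seconds, passed as a string
    taskStartToCloseTimeout="60",
)
```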

Migrating components from the datacenter to the cloud. Business-critical operations are hosted in a private datacenter but need to be moved entirely to the cloud without causing disruptions. Amazon SWF-based applications can combine workers that wrap components running in the datacenter with workers that run in the cloud. To transition a datacenter worker seamlessly, new workers of the same type are first deployed in the cloud. The workers in the datacenter continue to run as usual, alongside the new cloud-based workers. The cloud-based workers are tested and validated by routing a portion of the load through them; during this testing, the application is not disrupted because the workers in the datacenter continue to run. After successful testing, the workers in the datacenter are gradually stopped and those in the cloud are scaled up, until the workers run entirely in the cloud. This process can be repeated for the other workers in the datacenter so that the application moves entirely to the cloud. If, for business reasons, certain processing steps must continue to be performed in the private datacenter, those workers can keep running there and still participate in the application.
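
In Amazon SWF, this kind of routing is typically done with task lists: a worker only receives tasks from the task list it polls, so the decider can shift load between the datacenter fleet and the cloud fleet simply by choosing a task list per task. Here is a hypothetical sketch; the task list names, activity type, and 10% split are illustrative, not prescribed by SWF.

```python
# Hypothetical sketch: route a fraction of activity tasks to cloud workers.
import random

CLOUD_FRACTION = 0.10  # start by sending ~10% of the load to the cloud fleet

def schedule_encode(activity_id: str, task_input: str) -> dict:
    """Build a ScheduleActivityTask decision for the decider to return.

    On-premises workers poll "encode-datacenter"; the newly deployed
    cloud workers poll "encode-cloud".
    """
    if random.random() < CLOUD_FRACTION:
        task_list = {"name": "encode-cloud"}
    else:
        task_list = {"name": "encode-datacenter"}
    return {
        "decisionType": "ScheduleActivityTask",
        "scheduleActivityTaskDecisionAttributes": {
            "activityType": {"name": "EncodeChunk", "version": "1.0"},
            "activityId": activity_id,
            "taskList": task_list,
            "input": task_input,
        },
    }
```

Raising CLOUD_FRACTION toward 1.0, and finally retiring the datacenter task list, completes the migration without changing the workers themselves.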

Processing large product catalogs using Amazon Mechanical Turk. To validate data in large catalogs, the products in the catalog are processed in batches, and different batches can be processed concurrently. For each batch, the product data is extracted from servers in the datacenter and transformed into the CSV (comma-separated values) files required by Amazon Mechanical Turk’s Requester User Interface (RUI). The CSV files are uploaded to populate and run the HITs (Human Intelligence Tasks). When the HITs complete, the resulting CSV file is reverse-transformed to get the data back into the original format. The results are then assessed, and Amazon Mechanical Turk workers are paid for acceptable results. Failures are weeded out and reprocessed, while the acceptable HIT results are used to update the catalog. As batches are processed, the system must track the quality of the Amazon Mechanical Turk workers and adjust their payments accordingly. Failed HITs are re-batched and sent through the pipeline again.
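
The per-batch transform step in such a pipeline could itself be an ordinary SWF activity. As an illustration only, here is what the batch-to-CSV transform might look like in Python; the column names are hypothetical, since the actual layout is dictated by your HIT template.

```python
# Hypothetical sketch: transform one product batch into an RUI-style CSV.
import csv
import io

def batch_to_csv(products):
    """Serialize a batch of product dicts into CSV for HIT creation.

    The column names below are illustrative; the real columns must match
    the placeholders in your Mechanical Turk HIT template.
    """
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["product_id", "title", "image_url"])
    writer.writeheader()
    for product in products:
        writer.writerow({
            "product_id": product["id"],
            "title": product["title"],
            "image_url": product.get("image_url", ""),
        })
    return out.getvalue()

# Example: csv_text = batch_to_csv([{"id": "B0001", "title": "Widget"}])
```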