Tag: AWS CLI


How to Use Batch References in Amazon Cloud Directory to Refer to New Objects in a Batch Request

In Amazon Cloud Directory, it’s often necessary to add new objects or to create relationships between new and existing objects to reflect changes in a real-world hierarchy. With Cloud Directory, you can make these changes efficiently by using batch references within batch operations.

Let’s say I want to take an existing child object in a hierarchy, detach it from its parent, and reattach it to another part of the hierarchy. A simple way to do this would be to make a call to get the object’s unique identifier, another call to detach the object from its parent using the unique identifier, and a third call to attach it to a new parent. However, if I use batch references within a batch write operation, I can perform all three of these actions in the same request, greatly simplifying my code and reducing the round trips required to make such changes.
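For a concrete sketch of that flow with the AWS SDK for Java, the following example detaches a child object and reattaches it under a new parent in one batch write. The directory ARN, selectors, and link names are illustrative placeholders, not values from the post.

import com.amazonaws.services.clouddirectory.AmazonCloudDirectory;
import com.amazonaws.services.clouddirectory.AmazonCloudDirectoryClientBuilder;
import com.amazonaws.services.clouddirectory.model.*;

AmazonCloudDirectory client = AmazonCloudDirectoryClientBuilder.defaultClient();
String directoryArn = "arn:aws:clouddirectory:us-east-1:111122223333:directory/EXAMPLE"; // hypothetical

// Detach the child from its current parent and name the detached object
// with a batch reference so later operations in the same request can use it.
BatchDetachObject detach = new BatchDetachObject()
        .withParentReference(new ObjectReference().withSelector("/NorthAmerica"))
        .withLinkName("Seattle")
        .withBatchReferenceName("detachedWarehouse");

// Reattach it under the new parent by selecting "#<batch reference name>".
BatchAttachObject attach = new BatchAttachObject()
        .withParentReference(new ObjectReference().withSelector("/NorthAmerica/Northwest"))
        .withChildReference(new ObjectReference().withSelector("#detachedWarehouse"))
        .withLinkName("Seattle");

// Both operations run in a single atomic batch write request.
client.batchWrite(new BatchWriteRequest()
        .withDirectoryArn(directoryArn)
        .withOperations(
                new BatchWriteOperation().withDetachObject(detach),
                new BatchWriteOperation().withAttachObject(attach)));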

In this post, I demonstrate how to use batch references in a single write request to simplify adding and restructuring a Cloud Directory hierarchy. I have used the AWS SDK for Java for all the sample code in this post, but you can use other language SDKs or the AWS CLI in a similar way.

Using batch references

In my previous post, I demonstrated how to add AnyCompany’s North American warehouses to a global network of warehouses. As time passes and demand grows, AnyCompany launches multiple warehouses in North American cities to fulfill customer orders with continued efficiency. This requires the company to restructure the network to group warehouses in the same region so that the company can apply similar standards to them, such as delivery times, delivery areas, and types of products sold. (more…)

Write and Read Multiple Objects in Amazon Cloud Directory by Using Batch Operations

Amazon Cloud Directory is a hierarchical data store that enables you to build flexible, cloud-native directories for organizing hierarchies of data along multiple dimensions. For example, you can create an organizational structure that can be navigated through separate hierarchies for reporting structure, location, and cost center.

In this blog post, I demonstrate how you can use Cloud Directory APIs to write and read multiple objects by using batch operations. With batch write operations, you can execute a sequence of operations atomically—meaning that all of the write operations must occur, or none of them do. You also can make your application efficient by reducing the number of required round trips to read and write objects to your directory. I have used the AWS SDK for Java for all the sample code in this blog post, but you can use other language SDKs or the AWS CLI in a similar way.
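As a minimal sketch of the read side, a single batchRead call can list the children of several objects in one round trip. The directory ARN and selectors below are illustrative placeholders.

import com.amazonaws.services.clouddirectory.AmazonCloudDirectory;
import com.amazonaws.services.clouddirectory.AmazonCloudDirectoryClientBuilder;
import com.amazonaws.services.clouddirectory.model.*;

AmazonCloudDirectory client = AmazonCloudDirectoryClientBuilder.defaultClient();
String directoryArn = "arn:aws:clouddirectory:us-east-1:111122223333:directory/EXAMPLE"; // hypothetical

// List the children of two objects in a single request.
BatchReadResult result = client.batchRead(new BatchReadRequest()
        .withDirectoryArn(directoryArn)
        .withConsistencyLevel(ConsistencyLevel.EVENTUAL)
        .withOperations(
                new BatchReadOperation().withListObjectChildren(new BatchListObjectChildren()
                        .withObjectReference(new ObjectReference().withSelector("/AsiaWarehouses"))),
                new BatchReadOperation().withListObjectChildren(new BatchListObjectChildren()
                        .withObjectReference(new ObjectReference().withSelector("/EuropeWarehouses")))));

// Each successful response holds a map of link name to object identifier.
for (BatchReadOperationResponse response : result.getResponses()) {
    response.getSuccessfulResponse().getListObjectChildren().getChildren()
            .forEach((linkName, objectId) -> System.out.println(linkName + " -> " + objectId));
}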

Using batch write operations

To demonstrate batch write operations, let’s say that AnyCompany’s warehouses are organized to determine the fastest methods to ship orders to its customers. In North America, AnyCompany plans to open new warehouses regularly so that the company can keep up with customer demand while continuing to meet the delivery times to which it is committed.

The following diagram shows part of AnyCompany’s global network, including Asian and European warehouse networks. (more…)

How to Facilitate Data Analysis and Fulfill Security Requirements by Using Centralized Flow Log Data

As an AWS Professional Services consultant, I work directly with AWS customers on a daily basis. One of my customers recently asked me to provide a solution that helps fulfill their security requirements by sending the flow log data from VPC Flow Logs to a central AWS account. This is a common requirement because centralizing logs makes them available for in-depth analysis. In addition, my customers regularly ask for a simple, scalable, and serverless solution that doesn’t require them to create and maintain custom code.

In this blog post, I demonstrate how to configure your AWS accounts to send flow log data from VPC Flow Logs to an Amazon S3 bucket located in a central AWS account by using only fully managed AWS services. The benefit of using fully managed services is that you can lower or even eliminate operational costs because AWS manages and scales the resources automatically.

Solution overview

The solution in this post uses VPC Flow Logs, which is configured in a source account to send flow logs to an Amazon CloudWatch Logs log group. To receive the logs from multiple accounts, this solution uses a CloudWatch Logs destination in the central account. Finally, the solution uses Amazon Kinesis Firehose, a fully managed service that delivers streaming data to scalable, durable S3 object storage without the need to write custom applications or manage resources. After the logs are processed and stored in an S3 bucket, they can be transitioned automatically to a lower-cost, long-term storage solution (such as Amazon Glacier) to help meet any company-specific or industry-specific data-retention requirements. (more…)
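A minimal sketch of the source-account side of that configuration might look like the following, assuming the central account has already created a CloudWatch Logs destination (named CentralFlowLogs here for illustration) that streams to Kinesis Firehose. The log group name and destination ARN are placeholders.

import com.amazonaws.services.logs.AWSLogs;
import com.amazonaws.services.logs.AWSLogsClientBuilder;
import com.amazonaws.services.logs.model.PutSubscriptionFilterRequest;

AWSLogs logs = AWSLogsClientBuilder.defaultClient();

// Subscribe the flow log group to the central account's destination.
// An empty filter pattern forwards every flow log record.
logs.putSubscriptionFilter(new PutSubscriptionFilterRequest()
        .withLogGroupName("/vpc/flowlogs")   // hypothetical log group name
        .withFilterName("ToCentralAccount")
        .withFilterPattern("")
        .withDestinationArn("arn:aws:logs:us-east-1:111111111111:destination:CentralFlowLogs"));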

The Resource Groups Tagging API Makes It Easier to List Your Resources by Using a New Pagination Parameter

Today, the Resource Groups Tagging API introduced a pagination parameter to the GetResources action that makes it easier for you to manage lists of resources returned by your queries. Using this parameter, you can list your resources that are associated with specific tags or resource types, and limit result sets to a specific number per page. Previously, you could paginate results only by the number of tags.

Let’s say you want to query your resources that have tags with the key of “stage” and the value of “production”. You want to return as many as 25 resources per page of results. The following Java code example meets those criteria.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import com.amazonaws.services.resourcegroupstaggingapi.AWSResourceGroupsTaggingAPI;
import com.amazonaws.services.resourcegroupstaggingapi.AWSResourceGroupsTaggingAPIClientBuilder;
import com.amazonaws.services.resourcegroupstaggingapi.model.*;

// Match resources that are tagged with stage=production.
TagFilter tagFilter = new TagFilter();
tagFilter.setKey("stage");
tagFilter.setValues(Arrays.asList("production"));

List<TagFilter> tagFilters = new ArrayList<>();
tagFilters.add(tagFilter);

// Request up to 25 matching resources per page.
AWSResourceGroupsTaggingAPI client = AWSResourceGroupsTaggingAPIClientBuilder.defaultClient();
GetResourcesRequest request = new GetResourcesRequest()
        .withResourcesPerPage(25)
        .withTagFilters(tagFilters);
GetResourcesResult result = client.getResources(request);
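To walk through every page, you can keep calling getResources until the returned pagination token comes back empty. Here is a minimal sketch that reuses the client and filters above:

String paginationToken = null;
do {
    GetResourcesResult page = client.getResources(new GetResourcesRequest()
            .withResourcesPerPage(25)
            .withTagFilters(tagFilters)
            .withPaginationToken(paginationToken));
    // Print the ARN of each resource on this page.
    page.getResourceTagMappingList()
            .forEach(mapping -> System.out.println(mapping.getResourceARN()));
    paginationToken = page.getPaginationToken();
} while (paginationToken != null && !paginationToken.isEmpty());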

Also, with the updated AWS CLI, the get-resources command returns all items that meet your query criteria by default. If you want to paginate, the AWS CLI continues to support returning a subset of items along with a pagination token for looping through the remaining items.

For example, the following AWS CLI script uses automatic pagination to return all resources that meet the query criteria.

aws resourcegroupstaggingapi get-resources

However, if you want to return resources in groups of 25, the following AWS CLI script example uses custom pagination and returns as many as 25 resources per page that meet the query criteria.

aws resourcegroupstaggingapi get-resources --resources-per-page 25

If you have comments about this post, submit them in the “Comments” section below. Start a new thread on the Resource Groups Tagging API forum if you have questions about or issues using the new functionality.

– Nitin

Now Create and Manage Users More Easily with the AWS IAM Console

Today, we updated the AWS Identity and Access Management (IAM) console to make it easier for you to create and manage your IAM users. These improvements include an updated user creation workflow and new ways to assign and manage permissions. The new workflow guides you through the process of setting user details, including enabling programmatic access (via access key) and console access (via password). In addition, you can assign permissions by adding users to a group, copying permissions from an existing user, or attaching policies directly to users. We have also updated the tools to view details and manage permissions for existing users. Finally, we’ve added 10 new AWS managed policies for job functions that you can use when assigning permissions.

In this post, I show how to use the updated user creation workflow and introduce changes to the user details pages. If you want to learn more about the new AWS managed policies for job functions, see How to Assign Permissions Using New AWS Managed Policies for Job Functions.

(more…)

How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker

Docker enables you to package, ship, and run applications as containers. This approach provides a comprehensive abstraction layer that allows developers to “containerize” or “package” any application and have it run on any infrastructure. Docker containers are analogous to shipping containers in that they provide a standard and consistent way of shipping almost anything.

One of the challenges when deploying production applications using Docker containers is deciding how to handle run-time configuration and secrets. Secrets are anything to which you want to tightly control access, such as API keys, passwords, and certificates. Injecting secrets into containers via environment variables in the Docker run command or the Amazon EC2 Container Service (ECS) task definition is the most common method of secret injection. However, it may not provide the desired level of security because environment variables can be shared with any linked container, read by any process running on the same Amazon EC2 instance, preserved in intermediate layers of an image, and viewed via the Docker inspect command or ECS API calls. You could also bake secrets into the container image, but someone could still access them via the Docker build cache.

In this blog post, I will show you how to store secrets on Amazon S3 and use AWS Identity and Access Management (IAM) roles to grant access to those stored secrets, with an example WordPress application deployed as a Docker image on ECS. Using IAM roles means that developers and operations staff do not have credentials that can access the secrets; only the application and the staff responsible for managing the secrets can access them. The ECS deployment model runs tasks on EC2 instances dedicated to a single AWS account and not shared between customers, which provides sufficient isolation between different container environments. (more…)
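As a rough sketch of the retrieval step, a container whose task runs under an IAM role can read a secret from S3 without any baked-in credentials. The bucket and object key below are hypothetical names for illustration.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Inside the container, the IAM role supplies temporary credentials via the
// default provider chain; no access keys are stored in the image or task definition.
AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
String dbPassword = s3.getObjectAsString(
        "examplecorp-secrets",      // hypothetical bucket
        "wordpress/db-password");   // hypothetical object key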

How to Use the REST API to Encrypt S3 Objects by Using AWS KMS

AWS Key Management Service (AWS KMS) allows you to use keys under your control to encrypt data at rest stored in Amazon S3. The two primary methods for implementing this encryption are server-side encryption (SSE) and client-side encryption (CSE), and each method offers multiple interfaces and API options. In this blog post, I will show you how to use the AWS REST APIs to secure S3 objects with KMS keys via SSE. Using the native REST APIs instead of the AWS SDKs may be a better option for you if you are developing cross-platform code or want more control over API usage within your applications. (more…)
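To give a flavor of the REST approach, the following sketch sets the SSE-KMS headers on a raw PUT request. It omits the required Signature Version 4 authentication headers for brevity, and the bucket, object key, and KMS key ARN are placeholders.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

URL url = new URL("https://examplebucket.s3.amazonaws.com/example-object"); // hypothetical bucket and key
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("PUT");
conn.setDoOutput(true);
// Ask S3 to encrypt the object at rest under an AWS KMS key (SSE-KMS).
conn.setRequestProperty("x-amz-server-side-encryption", "aws:kms");
// Optional: name a specific KMS key; without it, S3 uses the account's default aws/s3 key.
conn.setRequestProperty("x-amz-server-side-encryption-aws-kms-key-id",
        "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"); // hypothetical key ARN
// A real request must also carry AWS Signature Version 4 headers (omitted here).
try (OutputStream out = conn.getOutputStream()) {
    out.write("example object body".getBytes("UTF-8"));
}
System.out.println("HTTP status: " + conn.getResponseCode());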

How to Configure Your EC2 Instances to Automatically Join a Microsoft Active Directory Domain

Seamlessly joining Windows EC2 instances in AWS to a Microsoft Active Directory domain is a common scenario, especially for enterprises building a hybrid cloud architecture. With AWS Directory Service, you can target an Active Directory domain managed on-premises or within AWS. How to Connect Your On-Premises Active Directory to AWS Using AD Connector takes you through the process of implementing that scenario.

In this blog post, I will first show you how to get the Amazon EC2 launch wizard to pick up your custom domain-join configuration by default—including an organizational unit—when launching new Windows instances. I will also show you how to enable an EC2 Auto Scaling group to automatically join newly launched instances to a target domain. Amazon EC2 Simple Systems Manager (SSM) plays a central role in enabling both scenarios. (more…)
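As a rough sketch of the mechanism, you might register a domain-join document with SSM and associate it with an instance. The directory ID, domain name, DNS addresses, document name, and instance ID below are all placeholders.

import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
import com.amazonaws.services.simplesystemsmanagement.model.CreateAssociationRequest;
import com.amazonaws.services.simplesystemsmanagement.model.CreateDocumentRequest;

AWSSimpleSystemsManagement ssm = AWSSimpleSystemsManagementClientBuilder.defaultClient();

// Hypothetical domain-join document using the aws:domainJoin plugin.
String domainJoinDoc =
        "{\"schemaVersion\":\"1.2\"," +
        "\"description\":\"Join instances to the corp.example.com domain\"," +
        "\"runtimeConfig\":{\"aws:domainJoin\":{\"properties\":{" +
        "\"directoryId\":\"d-1234567890\"," +
        "\"directoryName\":\"corp.example.com\"," +
        "\"dnsIpAddresses\":[\"10.0.0.10\",\"10.0.1.10\"]}}}}";

ssm.createDocument(new CreateDocumentRequest()
        .withName("corpDomainJoin")
        .withContent(domainJoinDoc));

// Associate the document with a newly launched instance so it joins the domain.
ssm.createAssociation(new CreateAssociationRequest()
        .withName("corpDomainJoin")
        .withInstanceId("i-0123456789abcdef0")); // hypothetical instance ID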

AWS Directory Service Now Supports API Access and Logging Via AWS CloudTrail

Developers can now programmatically create and configure Simple AD and AD Connector directories in AWS Directory Service via the AWS SDKs or AWS CLI. You can also now use AWS CloudTrail to log API actions performed via an SDK, the CLI, or the AWS Directory Service console. Permissions for performing these actions can be controlled via an AWS IAM policy, and the APIs can be used in all AWS regions in which Directory Service is available.
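For example, creating a Simple AD directory programmatically with the AWS SDK for Java might look like the following sketch; the domain name, password, VPC ID, and subnet IDs are placeholders.

import com.amazonaws.services.directory.AWSDirectoryService;
import com.amazonaws.services.directory.AWSDirectoryServiceClientBuilder;
import com.amazonaws.services.directory.model.*;

AWSDirectoryService ds = AWSDirectoryServiceClientBuilder.defaultClient();

// Create a small Simple AD directory in two subnets of an existing VPC.
CreateDirectoryResult created = ds.createDirectory(new CreateDirectoryRequest()
        .withName("corp.example.com")           // hypothetical domain name
        .withShortName("CORP")
        .withPassword("Str0ngAdminPassw0rd!")   // Administrator password for the new directory
        .withSize(DirectorySize.Small)
        .withVpcSettings(new DirectoryVpcSettings()
                .withVpcId("vpc-0123456789abcdef0")
                .withSubnetIds("subnet-aaaa1111", "subnet-bbbb2222")));

System.out.println("Directory ID: " + created.getDirectoryId());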

To get started, download the latest AWS SDK or CLI, or learn more about the new APIs in our developer guide.

If you have questions, please post them on the Directory Service forum.

– Rob

An Easier Way to Determine the Presence of AWS Account Access Keys

Last month, the AWS Security Blog encouraged you to adhere to AWS Identity and Access Management (IAM) best practices. One of these best practices is to lock away your AWS account (root) access keys and password, and not use them for day-to-day interaction with AWS. In fact, when it comes to your root account access keys, we recommend that you delete them. How, though, do you determine if your root account even has access keys?

In the past, the easiest way to determine whether your root account had access keys was to sign in to the AWS Management Console with your root account password, which is something we recommend against doing regularly. Instead, you should enable multi-factor authentication (MFA) on your root account and sign in with it only when absolutely necessary. Furthermore, if you wanted to determine programmatically whether your root account had an access key, the only way to do so was to use the root account access key itself, which is an obvious dilemma. Now, though, you can use the credentials of an IAM user with the AWS Command Line Interface (CLI) or the AWS SDKs to determine whether your root account has access keys. For the rest of this post, I’ll show you how you can use the AWS CLI to check whether your root account has access keys. (more…)
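The post walks through the AWS CLI; as a companion sketch, the same check with the AWS SDK for Java relies on the GetAccountSummary action’s AccountAccessKeysPresent entry:

import com.amazonaws.services.identitymanagement.AmazonIdentityManagement;
import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClientBuilder;

// GetAccountSummary works with IAM user credentials; no root key is needed.
AmazonIdentityManagement iam = AmazonIdentityManagementClientBuilder.defaultClient();
Integer rootAccessKeys = iam.getAccountSummary().getSummaryMap().get("AccountAccessKeysPresent");
if (rootAccessKeys != null && rootAccessKeys > 0) {
    System.out.println("The root account has access keys; consider deleting them.");
} else {
    System.out.println("No root account access keys found.");
}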