AWS Marketplace
Deliver High-Quality Software Faster with CloudEQ’s DevOps Pipeline Automation and the AWS Well-Architected Framework
Organizations using manual or partially automated infrastructure often experience deployment delays that impact time-to-market and make it harder to maintain consistent security and compliance processes. Organizations need both agility and governance to innovate on AWS. A multi-account strategy improves resource isolation, security, and compliance while helping organizations meet regulatory requirements and track costs.
CloudEQ, an AWS Partner in AWS Marketplace, addresses these challenges by integrating AWS landing zone with automated DevOps pipelines. The Automated DevOps Pipeline solution in AWS Marketplace uses AWS Well-Architected Framework practices to deploy secure, compliant AWS environments by combining a multi-account landing zone with infrastructure as code using Terraform and CI/CD pipelines using GitHub Actions.
The AWS Well-Architected Framework helps organizations build solutions that deliver across security, performance, operations, and cost optimization. These solutions include automated monitoring and proactive issue detection to reduce operational overhead. By implementing Well-Architected solutions, organizations can focus on innovation while working with adaptable cloud infrastructure. This post explains how to accelerate software delivery and improve governance using CloudEQ’s DevOps Pipeline Automation solution.
Solution overview
When developers commit infrastructure code to GitHub, an automated CI/CD workflow is triggered. The pipeline runs a Bridgecrew security scan to identify misconfigurations and executes a Terraform plan to preview changes. After manual approval, the pipeline runs Terraform apply to provision AWS infrastructure. Terraform state files are stored in Amazon S3 with state locking for consistency and collaboration.

Figure 1: Architecture diagram
Implementation Steps
- Develop and commit infrastructure code
Developers define infrastructure as code (IaC) using Terraform and commit it to a GitHub repository. This enables version control and collaboration while maintaining consistency across environments.
- Trigger the automated CI/CD pipeline
When a change is pushed, a GitHub Actions workflow is triggered. The pipeline connects securely to AWS using OpenID Connect (OIDC), eliminating the need for long-lived credentials.
- Run security and compliance checks
The pipeline performs an automated Bridgecrew (Checkov) scan to validate Terraform code against security and compliance best practices before deployment.
- Generate and review the Terraform plan
The workflow runs Terraform plan to preview infrastructure changes. This step helps teams understand what resources will be created or modified.
- Manual approval for deployment
A manual approval gate ensures that changes are reviewed before applying them, adding governance control to the automation process.
- Provision infrastructure on AWS
Once approved, the pipeline executes terraform apply to deploy the infrastructure. Terraform state files are securely stored in an Amazon S3 bucket, ensuring reliable state management and collaboration.
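The remote state arrangement described above can be sketched as a Terraform backend block. This is a sketch, not CloudEQ's exact configuration; the bucket name, key, region, and DynamoDB table name are placeholders you would replace with your own:

```hcl
# Remote state backend: state files live in S3, with a DynamoDB table for state locking
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"      # placeholder bucket name
    key            = "landing-zone/terraform.tfstate" # placeholder state file path
    region         = "us-east-1"                      # placeholder region
    dynamodb_table = "terraform-state-lock"           # placeholder lock table
    encrypt        = true                             # encrypt state at rest
  }
}
```

The DynamoDB table prevents two pipeline runs from writing state concurrently, which supports the collaboration model described above.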
Prepare your AWS environment
To begin, make sure you have your AWS multi-account structure and tooling ready:
- Log in to your AWS management account as a user who has admin access and verify that AWS Organizations is enabled.
- Create an S3 bucket for Terraform remote state files and an Amazon DynamoDB table for state locking to prevent concurrent writes.
- On your local machine (or CI runner), install Terraform (version 1.3 or later), the AWS CLI, and Git. GitHub Actions runs as part of your repository's workflows, so no local installation is needed for it.
- Write your Terraform configuration for the landing zone. This will include defining your organization (using the Terraform AWS provider to create OUs and accounts), baseline infrastructure (like an S3 log archive bucket, AWS CloudTrail for auditing, and AWS Config rules), and other foundational resources.
- Create an AWS Identity and Access Management (IAM) role (for example, GitHubActionsDeploymentRole) with a trust policy that allows the GitHub Actions OIDC provider to assume it.
a. Attach a policy to this role granting necessary permissions (AWS Organizations, IAM, Amazon S3, and so on, limited to your deployment scope).
b. Note the role Amazon Resource Name (ARN).
- In your GitHub repository settings, configure an OIDC trust with the role ARN, or add repository secrets for AWS roles if needed. This lets your GitHub Actions workflow authenticate to AWS securely.
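The trust policy attached to the deployment role can look like the following. This is a sketch: the account ID, organization, and repository names are placeholders, and you would tighten the `sub` condition to match only the branches or environments allowed to deploy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:*"
        }
      }
    }
  ]
}
```

Because the role is assumed through short-lived web identity tokens, no long-lived AWS access keys need to be stored in GitHub.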
Configure the CI/CD pipeline
Next, set up the GitHub Actions workflow that will deploy the landing zone infrastructure:
- In your repository, add a workflow YAML file. For example, you can use the following file: https://github.com/ollionorg/aws-landing-zone/blob/main/.github/workflows/workflows.yaml
- Add a step in the workflow to run a security scan on the Terraform code before deployment. For instance, use the Bridgecrew Checkov GitHub Action to scan the repository:
name: Checkov IaC Security Scan
uses: bridgecrewio/checkov-action@v12
- After a successful scan, include a step to run terraform init and terraform plan:
name: Terraform Plan
run: |
  terraform init
  terraform plan
- Implement a manual approval gate. In GitHub Actions, one approach is to use environment protection rules so that the deploy job waits for approval in the GitHub UI.
- Add a Terraform apply stage. After the job is approved, the pipeline should perform Terraform apply to execute the changes and create or update the AWS infrastructure.
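Putting these steps together, a minimal workflow might look like the following. This is a sketch under assumptions: the role ARN, region, branch, and environment name are placeholders, and it uses the public aws-actions/configure-aws-credentials, hashicorp/setup-terraform, and bridgecrewio/checkov-action actions rather than CloudEQ's exact workflow file:

```yaml
name: Landing Zone Deploy
on:
  push:
    branches: [main]
permissions:
  id-token: write   # required for OIDC authentication to AWS
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    # Protection rules on this environment act as the manual approval gate
    environment: production
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111122223333:role/GitHubActionsDeploymentRole
          aws-region: us-east-1
      - name: Checkov IaC Security Scan
        uses: bridgecrewio/checkov-action@v12
      - uses: hashicorp/setup-terraform@v3
      - name: Terraform Plan
        run: |
          terraform init
          terraform plan
      - name: Terraform Apply
        run: terraform apply -auto-approve
```

In practice, teams often split plan and apply into separate jobs so the approval gate sits between them and reviewers can inspect the plan output before the apply job starts.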
Deploy and validate the landing zone
Complete the following steps to deploy and validate the landing zone:
- To start the pipeline, commit and push your changes to the main branch of the repo. It will first perform the Checkov scan, then run the Terraform plan. Make sure the plan outputs the expected creation of OUs, accounts, and other resources.
- Review the security scan results. If the Checkov scan found any critical security issues (for example, if your Terraform inadvertently tried to create an S3 bucket without encryption), address those findings.
- When the plan is ready and no blockers are present, proceed with the manual approval.
- After approval, the Terraform apply will run and provision the infrastructure.
- Sign in to the AWS Management Console and check that everything is set up correctly:
- In AWS Organizations, you should see the new OUs and member accounts created.
- Check that baseline resources like the log archive S3 bucket, AWS Config recorder, and AWS CloudTrail logs are present in the appropriate accounts.
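As an illustration of the kind of finding the scan surfaces, the unencrypted-bucket example above can be remediated in Terraform like this (a sketch; the resource and bucket names are placeholders):

```hcl
# An S3 bucket with default server-side encryption enabled,
# which satisfies Checkov's bucket-encryption checks
resource "aws_s3_bucket" "log_archive" {
  bucket = "my-log-archive-bucket" # placeholder bucket name
}

resource "aws_s3_bucket_server_side_encryption_configuration" "log_archive" {
  bucket = aws_s3_bucket.log_archive.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms" # or "AES256" for S3-managed keys
    }
  }
}
```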
Deploy workloads with DevOps pipelines (optional)
With the landing zone in place, you can use similar pipelines to deploy and manage workloads in your member accounts. This section provides an example of how to deploy an Amazon Elastic Kubernetes Service (Amazon EKS) cluster using the pipeline module provided:
- Add the Amazon EKS module configuration.
- In your Terraform repository (it could be a separate repo or a new directory for workloads), write the Terraform code for the EKS cluster. You can use CloudEQ’s Amazon EKS module, which includes best practice configurations.
- Add a new GitHub Actions workflow to your repository. This workflow will be similar to the landing zone pipeline. It will assume an IAM role through OIDC, then apply to create the EKS cluster.
- Commit and push the Amazon EKS module code and workflow. On approval, Terraform will create the EKS cluster and any add-ons defined.

Figure 2: Terraform Plan Output

Figure 3: Trend Micro vulnerabilities on Terraform Plan
- Verify the new EKS cluster. You can fetch the kubeconfig for the cluster (for example, if output by Terraform) and confirm you can connect.
- You can go to the Trend Micro console and run your modules with some unique tags to get checks on vulnerabilities, as shown in the following screenshot.

Figure 4: Trend Micro vulnerabilities on the Trend Micro dashboard
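The EKS workload configuration described in this section might be sketched as follows. This uses the public terraform-aws-modules/eks registry module as a stand-in for CloudEQ's Amazon EKS module; the cluster name, VPC and subnet IDs, and node group sizing are placeholder assumptions:

```hcl
# Sketch of an EKS cluster definition deployed through the workload pipeline
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "workload-eks" # placeholder cluster name
  cluster_version = "1.29"

  vpc_id     = "vpc-0123456789abcdef0"                # placeholder VPC
  subnet_ids = ["subnet-0aaa0000", "subnet-0bbb1111"] # placeholder subnets

  eks_managed_node_groups = {
    default = {
      instance_types = ["m5.large"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }
}
```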
Conclusion
CloudEQ’s DevOps pipeline automation with the AWS Well-Architected Framework helps organizations scale on AWS while maintaining governance. This solution can reduce deployment times and includes automated checks to support compliance requirements. The AWS Validated DevOps Pipeline Automation helps organizations align their applications with AWS Well-Architected practices.
Get started with CloudEQ DevOps Pipeline Automation in AWS Marketplace.
To learn more about the solution, contact CloudEQ through the Request private offer option in AWS Marketplace. Our team will discuss your requirements and guide you through implementation.
About the authors

Ryan Dsouza
Ryan Dsouza is a principal solutions architect in the Cloud Optimization organization at Amazon Web Services (AWS). Based in New York City, Ryan helps customers design, develop, and operate more secure, scalable, and innovative solutions using the breadth and depth of AWS capabilities to deliver measurable business outcomes. He is actively engaged in developing strategies, guidance, and tools to help customers architect solutions that optimize for performance, cost-efficiency, security, resilience, and operational excellence, adhering to the AWS Cloud Adoption Framework and AWS Well-Architected Framework.

Priyanka Sanjeev
Priyanka Sanjeev is a technical program manager in the Cloud Optimization organization at Amazon Web Services (AWS). Based in Seattle, Priyanka spearheaded the Well-Architected Validated Solutions initiative from concept to delivery, integrating mechanisms such as automated reviews, remediations, and enablement of the Well-Architected Framework into the solution build and delivery lifecycle. Solutions built following these principles stay Well-Architected throughout the lifecycle of the workload.

Kevin Mead
Kevin Mead is CloudEQ’s growth architect, with 20 years of experience crafting strategic solutions for Fortune 500 companies. He’s the visionary who identifies opportunities and turns them into business gold. As VP of Business Development, Kevin ensures that CloudEQ’s innovative cloud solutions are tailored to meet each client’s unique needs, driving transformative change and ensuring long-term partnerships. Kevin’s leadership is built on one simple principle: delivering unprecedented value to our clients and partners alike.