AWS Cloud Operations & Migrations Blog
How to optimize assessment of cloud services
As my colleague Ilya Epshteyn introduced in his blog titled “How financial institutions can approve AWS services for highly confidential data,” a formal assessment process for cloud services is common across the financial services industry. These assessment processes vary in depth and breadth, striving to determine which cloud services are best suited to fulfill business requirements while satisfying industry expectations and sound technology risk management. This blog offers simple guidance to help you create a new assessment process for cloud services, or optimize your existing one.
I frequently meet with customers to discuss their governance and cloud assessment processes, and during those conversations I hear common themes. First, while the process is formal, it is often unowned, resulting in teams following a process without necessarily understanding the business outcome the process is designed to achieve. Without strong ownership, the participants and assessment scope are inconsistent, at times relying on personal expertise and individual best efforts rather than a structured framework that allows for differentiation by functionality. Finally, almost without exception, customers feel there is meaningful opportunity to enhance the quality of the assessment while increasing knowledge sharing, so they can iterate and build upon learnings.
Why is a formalized cloud service assessment process so important?
Financial services firms share a common regulatory obligation to evidence oversight of technology risk. Traditionally, the enterprise risk framework comprised siloed “three lines of defense” (3LoD): the first line, business and operations, as the risk takers executing controls; the second line as risk guardians, monitoring risks and assessing controls; and the third line as independent internal audit, or risk assurance. These three lines were responsible for technology risks and typically collected internal policies and documented procedures within their own teams, with an array of assessments and audits issued by and executed across the other lines of defense.
Embedding cloud assessment processes into this existing enterprise risk framework enables the organization to appropriately evidence how key technology decisions are made, how risks are assessed and mitigated, and how the strength of the control environment aligns to risk appetite, while offering an avenue to focus on the nuances of cloud-based services.
Tips for optimizing your cloud assessment
With the expectations of financial services customers in mind, I offer three actions customers can take to build or improve upon their cloud assessment process:
- Formalize Governance Structure. If not already formalized, the first step financial services institutions should take is to appoint a C-Level executive to take end-to-end accountability for cloud governance and control.
- Prioritize Platform Controls. In shaping your cloud assessment, introduce the distinction between cloud platform and business application functionality in your prioritization and requirements, emphasizing platform-level controls for security and resiliency as the initial priority. When focus shifts to business application functionality, you will be in a position to tailor assessments based on the controls inherited from the cloud platform.
- Build-In Continuous Improvement. Knowledge sharing and continuous improvement must be a stated priority from day 1. The expectation for proactive transparency builds trust across all three lines of defense as controls are developed and assessed. Conscious and proactive sharing provides confidence that the controls were designed and are performing effectively for the initial production use case. As usage and expertise with AWS increases, it also facilitates continuous improvement in control strength and coverage.
Formalize Governance Structure
The identification of the appropriate C-Level executive to have full accountability for cloud governance is an important first step. From the outset, this individual sets the tone for cloud governance and control, responsible for creating the structure and process for cloud assessment, usage, and ongoing monitoring. The key is to appoint a strong, well-positioned leader who is incentivized to establish a well-controlled yet agile environment, leveraging expertise from across the organization.
Once assigned, that governance lead should formalize the cross-functional structure that shapes the assessment process, codify policies and required governing processes, and support ongoing assessments. In my experience, a virtual team, where diverse expertise can be leveraged, supported by a formal governance framework is most effective.
Considerations for Effective Cloud Governance
- An enterprise-wide cloud strategy that outlines objectives for effective cloud governance and acknowledges expertise is built over time, with measured adoption and usage.
- An assigned, engaged, and committed executive accountable for cloud governance, integrated into the overall governance structure for ongoing oversight.
- Knowledgeable and engaged risk and control stakeholders (across the three lines of defense) as formal participants in cloud governance activities.
- Identification of a cloud enablement team, with formal alignment to enterprise governance frameworks and processes.
- Defined process communicated to the organization, with automated enforcement to ensure only authorized cloud services are used.
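One common mechanism for that automated enforcement is an AWS Organizations Service Control Policy (SCP) that denies actions for any service outside an approved allowlist. The sketch below builds such a policy document as a Python dictionary; the service prefixes shown are purely illustrative, and attaching the policy (via AWS Organizations) is omitted.

```python
import json

def build_allowlist_scp(approved_service_prefixes):
    """Build an SCP document that denies all actions except those
    belonging to services that have completed assessment.

    Deny + NotAction together block every action whose service
    prefix is not on the approved list.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyUnapprovedServices",
                "Effect": "Deny",
                "NotAction": [f"{p}:*" for p in approved_service_prefixes],
                "Resource": "*",
            }
        ],
    }

# Hypothetical allowlist: only these services have been assessed so far.
approved = ["s3", "ec2", "kms", "cloudtrail", "config"]
print(json.dumps(build_allowlist_scp(approved), indent=2))
```

In practice the allowlist would be generated from the output of your assessment process, so the policy stays in lockstep with approvals.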
Prioritize Platform Controls
A common pattern that I have observed in working with customers is the creation of a one-size-fits-all approach to assessing cloud services. This typically takes the form of each service evaluated individually, often with a detailed checklist that is methodically completed.
Why is this less than ideal? First, it assumes that the threats for each service are equivalent (and therefore, the same assessment is an appropriate way to determine required controls). Second, this type of approach does not allow the assessor to distinguish by capability or functionality (which would allow, for example, differentiation between data-centric versus compute services). Most importantly, it fails to account for the existing control foundation (and therefore, may overstate the need for additional controls).
What I have seen work most effectively is a graduated control framework that establishes a non-negotiable foundation, and then adds required controls based on other factors, including environment, data sensitivity, and business criticality. This differentiation enables experimentation, without introducing inappropriate levels of risk. Specifically, while some controls must be preventative from the start in all environments for all data types, it may be acceptable for others to start as detective, with supporting monitoring. Well-controlled innovation is the goal.
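The graduated framework above can be made concrete as a simple decision function: given a workload's environment and data sensitivity, return whether additional controls must be preventative or may start as detective. The tier names and thresholds below are assumptions for illustration; your own risk appetite will define the real mapping.

```python
PREVENTATIVE = "preventative"
DETECTIVE = "detective"

def required_control_mode(environment: str, data_sensitivity: str) -> str:
    """Return the minimum mode for *additional* (non-foundation) controls.

    Foundation controls (no public access, least privilege) are always
    preventative regardless of this result; this function only grades
    the controls layered on top. Tier names are hypothetical.
    """
    if data_sensitivity in {"confidential", "restricted"}:
        return PREVENTATIVE   # sensitive data: block up front, everywhere
    if environment == "production":
        return PREVENTATIVE   # production: enforce, don't just observe
    # Sandbox/dev with non-sensitive data: monitor and alert while
    # teams experiment, then harden as usage matures.
    return DETECTIVE

print(required_control_mode("sandbox", "public"))       # detective
print(required_control_mode("production", "internal"))  # preventative
```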
What constitutes that non-negotiable control foundation?
At the highest level, you want to have confidence that: 1) your internal cloud resources are not publicly accessible; 2) your users have access only to the functionality appropriate for their role; 3) your cloud environment is resilient; and 4) you have a monitoring suite available to identify and address anomalies. With confidence in this foundation, comfort with experimentation increases.
A well-controlled cloud platform emphasizes:
- Identified suite of control services, such as AWS Organizations, AWS Identity and Access Management (IAM), and AWS Firewall Manager, that will serve as the control foundation for your cloud environment.
- AWS Service Catalog to enable self-service, while enforcing required controls.
- Environment and activity monitoring controls with services such as AWS Config, AWS CloudTrail, Amazon GuardDuty, and AWS Security Hub. Clear accountability for acting upon identified anomalies, with the option for automating corrective action using AWS Lambda and AWS Systems Manager Automation documents.
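The evaluation logic behind a detective control of this kind can be sketched as a pure function over a resource's configuration, in the style of a custom AWS Config rule. The example below checks an S3 bucket's PublicAccessBlock settings (all four flags must be enabled); the Lambda wiring and Config rule registration are omitted.

```python
def evaluate_public_access_block(config: dict) -> str:
    """Return a Config-style compliance verdict for an S3 bucket's
    PublicAccessBlock configuration. All four flags must be True
    for the bucket to be considered non-public."""
    required = (
        "BlockPublicAcls",
        "IgnorePublicAcls",
        "BlockPublicPolicy",
        "RestrictPublicBuckets",
    )
    compliant = all(config.get(flag) is True for flag in required)
    return "COMPLIANT" if compliant else "NON_COMPLIANT"

print(evaluate_public_access_block({
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}))  # COMPLIANT
```

A NON_COMPLIANT verdict is exactly the trigger point for the corrective automation mentioned above, whether a notification, a Lambda function, or a Systems Manager Automation document.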
With a strong foundation, assessing individual services can be streamlined
Building upon the foundational cloud platform, the assessment of the individual cloud services can be personalized to the service and the anticipated usage.
Reinforcing and expanding upon the control considerations from Ilya’s blog, as assessments are performed for individual services, you should confirm that: 1) your sensitive data can be encrypted, preferably using your own key (AWS Key Management Service with a customer managed key); 2) “public” access can be restricted (most often achieved leveraging VPC endpoints); 3) resiliency and recoverability capabilities are robust; and 4) the desired depth and breadth of configuration monitoring is available (typically achieved through the service’s integration with AWS Config).
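The first of those confirmations can itself be automated. As a sketch, the function below inspects a dictionary shaped like the S3 GetBucketEncryption response and reports whether default encryption uses SSE-KMS with a specific (customer managed) key; the key ARN in the example is hypothetical.

```python
def uses_customer_managed_kms(encryption_config: dict) -> bool:
    """True if default bucket encryption is SSE-KMS with an explicit
    key ID, mirroring the shape returned by S3 GetBucketEncryption."""
    rules = (encryption_config
             .get("ServerSideEncryptionConfiguration", {})
             .get("Rules", []))
    for rule in rules:
        sse = rule.get("ApplyServerSideEncryptionByDefault", {})
        if sse.get("SSEAlgorithm") == "aws:kms" and sse.get("KMSMasterKeyID"):
            return True
    return False

# Example response shape with a hypothetical customer managed key ARN.
example = {
    "ServerSideEncryptionConfiguration": {
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example",
            }
        }]
    }
}
print(uses_customer_managed_kms(example))  # True
```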
That basic confirmation helps to inform whether the cloud platform provides adequate control to allow for experimentation. It also allows the organization to gain valuable experience while preventative access policies are defined, configurations are set to enforce encryption, monitoring services are updated to include new services, and corrective automation is engineered to address anomalies.
This parallel approach allows collaborative engagement to capture common use cases, define architectural and configuration patterns to integrate into pipelines, and understand business use cases to validate control coverage, long before first production use.
Build-In Continuous Improvement
The final and most important element is to embed continuous improvement in all aspects of cloud governance. Frequent collaboration, critical assessment of controls, and swift root cause analysis for any incidents and anomalies promote trust. Trust, in turn, encourages agility in governance, development, and operations.
Frequent cross-functional collaboration encourages:
- Shifting left, with integration of controls into the development pipeline. This enables engineers to address potential control issues during development, and helps ensure completion of security and compliance validation, before production deployment.
- Increased automation in production operation, with enhanced monitoring capabilities and the opportunity for more frequent resiliency and disaster recovery testing. This offers confidence in the ability to withstand unanticipated production issues.
- Active engagement with internal risk and audit functions, for proactive review and assessment of the control environment, both design of the controls and the operating effectiveness.
- Leverage of the AWS Well-Architected Framework to support ongoing validation that the environment remains secure, resilient, performant, and cost optimized.
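Shifting left, as described above, often starts with a simple pre-deployment gate in the pipeline. The sketch below scans a CloudFormation-style template dictionary for S3 buckets that lack a BucketEncryption property; the resource names are hypothetical, and a real pipeline would typically use a purpose-built tool rather than this hand-rolled check.

```python
def find_unencrypted_buckets(template: dict) -> list:
    """Return logical IDs of AWS::S3::Bucket resources in a
    CloudFormation-style template that lack a BucketEncryption
    property -- a minimal shift-left pipeline gate."""
    findings = []
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue
        if "BucketEncryption" not in resource.get("Properties", {}):
            findings.append(logical_id)
    return findings

# Hypothetical template: one compliant bucket, one missing encryption.
template = {
    "Resources": {
        "GoodBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [{
                        "ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}
                    }]
                }
            },
        },
        "BadBucket": {"Type": "AWS::S3::Bucket", "Properties": {}},
    }
}
print(find_unencrypted_buckets(template))  # ['BadBucket']
```

Failing the build on a non-empty findings list lets engineers fix control issues during development, before security and compliance validation and long before production deployment.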
With the actions outlined above, I am confident you will be able to create an environment of collaboration across all three lines of defense, and embrace the culture of agility to capitalize on the disruptive promise of cloud computing.
My recommendation is to start with the final tip and formalize communications for lessons learned and information sharing. Starting with an open and honest current state assessment will build trust across the organization, and set the important foundation for continuous improvement.
I would love to know whether these tips were helpful in optimizing your cloud assessment approach, increasing engagement across risk and audit functions, and in turn, enhancing overall cloud governance.
Here is to staying well governed!
About the author
Jennifer is a principal technical program manager with AWS. In her role, she helps financial services organizations manage and govern their most sensitive workloads. She loves to cook and travel, always seeking new experiences in the kitchen and the world.