AWS for Industries

GxP Data Integrity when using AWS Services

Data is at the heart of any modern pharmaceutical company throughout every stage of the value chain. Understandably, data is also central to industry regulations and continues to be one of the main topics during FDA inspections. However, questions still arise around how to meet data integrity requirements when operating in the cloud.

Regulatory requirements to maintain the integrity of GxP data are typically implemented as part of a validated application. However, by implementing controls at the AWS service level, we can facilitate data integrity even for actions performed outside the validated application.

Let’s consider a use case where an application generates objects that contain GxP data. This data could be in any format (JSON, CSV, or PDF). Imagine a piece of lab equipment generates a CSV data file and stores it in Amazon Simple Storage Service (Amazon S3). The lab equipment software is responsible for the integrity of the contents of the file. However, users may have direct access to the objects and could download, modify, and re-upload them. So how can we configure Amazon S3 to help maintain data integrity?
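
To make the scenario concrete, here is a minimal sketch, using the AWS SDK for JavaScript v3 in TypeScript, of how such lab equipment software might upload a result file. The region, bucket name, and object key are hypothetical placeholders, and the optional checksum simply asks Amazon S3 to verify the payload it receives. The more interesting question is what the bucket itself should enforce.

  import { readFile } from "node:fs/promises";
  import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

  // Minimal sketch of the lab equipment software writing a CSV result file to
  // Amazon S3. Region, bucket, and key are hypothetical placeholders.
  const s3 = new S3Client({ region: "eu-west-1" });

  export async function uploadResult(localPath: string): Promise<void> {
    const body = await readFile(localPath);
    await s3.send(
      new PutObjectCommand({
        Bucket: "example-lab-results",    // hypothetical bucket name
        Key: "hplc/run-0001/results.csv", // hypothetical object key
        Body: body,
        ChecksumAlgorithm: "SHA256",      // ask S3 to verify the payload on upload
      })
    );
  }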

We will discuss data integrity requirements and describe an approach for defining controls to apply to AWS services. Together, these controls create a qualifiable ‘building block’.

Regulatory Background

Data integrity is central to life sciences industry regulations and guidance for organizations across the globe.

One of the common themes throughout is ALCOA+, a set of principles for ensuring data integrity. ALCOA started as Attributable, Legible, Contemporaneously recorded, Original or a true copy, and Accurate, but was then extended to also include Complete, Consistent, Enduring, Available, and finally Traceable. These principles guide the configuration of AWS services.

Layered approach to Service Configuration and Controls

Our overarching control objective is to maintain GxP data integrity.

Achieving that objective requires defining a collection of controls over the configuration and operation of AWS services. Enabling data integrity must be based on security best practices. Therefore, we will separate the configuration for security from the additional configuration for data integrity.

Figure 1 – Layered service configuration and controls

For example, an Amazon S3 bucket should be configured to encrypt data in transit and at rest. Amazon S3 managed encryption does protect the data. However, your own security standards may require the use of AWS Key Management Service (AWS KMS). AWS KMS lets you create, manage, and control cryptographic keys across your applications and more than 100 AWS services.
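
As an illustration, a minimal AWS CDK (TypeScript) sketch of that security layer might look like the following. The stack and construct names are arbitrary, and your own security standards determine the exact settings; treat this as a starting point rather than a definitive configuration.

  import { Stack, StackProps } from "aws-cdk-lib";
  import { Construct } from "constructs";
  import * as kms from "aws-cdk-lib/aws-kms";
  import * as s3 from "aws-cdk-lib/aws-s3";

  // Security layer only: encryption at rest with a customer managed KMS key,
  // encryption in transit enforced, and all public access blocked.
  export class SecureBucketStack extends Stack {
    constructor(scope: Construct, id: string, props?: StackProps) {
      super(scope, id, props);

      const dataKey = new kms.Key(this, "DataKey", { enableKeyRotation: true });

      new s3.Bucket(this, "GxpDataBucket", {
        encryption: s3.BucketEncryption.KMS,
        encryptionKey: dataKey,
        enforceSSL: true, // deny any request not made over TLS
        blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      });
    }
  }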

Other regulations may push the configuration further. For example, if the GxP data also contains personally identifiable information (PII), then the General Data Protection Regulation (GDPR) or Clinical Trial Regulations come into play. Therefore, the service configuration should also be flexible enough to allow additional configuration where needed, while still meeting the control objectives.

Layered on top of a secure bucket will be additional configurations to meet data integrity requirements. We can take each of the ALCOA+ principles as a source of requirements, and frame those data integrity requirements as controls guiding the configuration and operation of AWS services.
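
One way to express that data integrity layer in AWS CDK (TypeScript) is sketched below, combining the security settings from the previous example with additional settings annotated with the ALCOA+ principles they support. The mapping shown in the comments is illustrative, not a definitive interpretation of the principles.

  import { Stack, StackProps } from "aws-cdk-lib";
  import { Construct } from "constructs";
  import * as kms from "aws-cdk-lib/aws-kms";
  import * as s3 from "aws-cdk-lib/aws-s3";

  // Security layer plus the additional data integrity layer for GxP data.
  export class GxpBucketStack extends Stack {
    constructor(scope: Construct, id: string, props?: StackProps) {
      super(scope, id, props);

      const dataKey = new kms.Key(this, "DataKey", { enableKeyRotation: true });

      // Separate bucket to receive server access logs
      const accessLogs = new s3.Bucket(this, "AccessLogs", {
        encryption: s3.BucketEncryption.S3_MANAGED,
        blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      });

      new s3.Bucket(this, "GxpDataBucket", {
        // Security layer
        encryption: s3.BucketEncryption.KMS,
        encryptionKey: dataKey,
        enforceSSL: true,
        blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
        // Data integrity layer
        versioned: true,                    // Original / true copy: prior versions are retained
        objectLockEnabled: true,            // Enduring: objects can be protected from change or deletion
        serverAccessLogsBucket: accessLogs, // Attributable / Traceable: who accessed what, and when
        serverAccessLogsPrefix: "gxp-data/",
      });
    }
  }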

This forms the basis of a pre-configured ‘building block’, which is a configuration of the base AWS service.

Do we need to qualify base AWS Services for GxP workloads?

This is a common question. Qualification is the process of demonstrating the ability of components to fulfill specified requirements. Basically, you define your requirements and then verify that the component satisfies those requirements.

This is where you might be thinking, doesn’t AWS already do this?

Proposed regulatory changes have encouraged regulated companies to apply critical thinking here. AWS has requirements for each service and thoroughly tests the service using numerous configurations. These tests occur after every change, as part of a controlled change and release management process, to confirm that the service works as designed and that there has been no regression.

According to our definition of qualification, AWS does indeed qualify its services and you shouldn’t need to repeat this effort.

Now the question is: how can this compliance be demonstrated to a regulator?

It is important to look at compliance through a risk management lens. An International Society for Pharmaceutical Engineering (ISPE) magazine article states that SOC 2 reports (along with additional documentation about a supplier's Quality Management System) provide sufficient evaluation of supplier processes when procuring Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) services.

The SOC 2 reports may be a good focal point, but ISPE GAMP 5 Guide 2nd Edition also mentions leveraging a range of certifications and attestations. These are made available to customers for download through AWS Artifact and are created by AWS’ numerous compliance programs (ISO 9001, ISO 27001, ISO 27017, HITRUST, NIST, and others).

Therefore, to be confident that a service has been qualified, and remains in a qualified state, that service must be included in the compliance programs mentioned above. Services that are covered by those compliance programs can be considered eligible for use as part of GxP workloads.

Should your local regulatory agency want to see more than certifications and reports, ISPE GAMP 5 also states that “Postal audits may be appropriate for suppliers of standard and configurable products and services”. As such a supplier, AWS will respond to postal audits; contact your account team for details.

Qualified Building Blocks

It’s important to understand that although AWS tests the standard functionality of a service, it cannot test the suitability of the service to meet your specific requirements and intended use. Therefore, you shouldn’t need to repeat the testing AWS does, but you still need to test your configuration of the service.

You will need to perform a formal qualification of your configuration.

Once you have approved a service for use in general, it should go through a formal qualification process before being made available to your developers of GxP workloads. Controlling which services are made available to developers is done through a Service Control Policy (SCP). To learn more about this, read the blog Approving AWS services for GxP workloads.
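
As a rough sketch of that control, the following AWS CDK (TypeScript) snippet defines an allow-list style SCP that denies every action outside a set of approved services. The service list, policy name, and target organizational unit ID are hypothetical placeholders; the blog linked above covers the approach in detail.

  import { App, Stack } from "aws-cdk-lib";
  import * as organizations from "aws-cdk-lib/aws-organizations";

  const app = new App();
  const stack = new Stack(app, "GxpGovernanceStack");

  // Allow-list SCP: deny any action that does not belong to an approved service.
  // The approved services and the target OU ID are placeholders for illustration.
  new organizations.CfnPolicy(stack, "ApprovedGxpServices", {
    name: "approved-gxp-services",
    type: "SERVICE_CONTROL_POLICY",
    targetIds: ["ou-example-gxpworkloads"],
    content: {
      Version: "2012-10-17",
      Statement: [
        {
          Sid: "DenyUnapprovedServices",
          Effect: "Deny",
          NotAction: ["s3:*", "kms:*", "config:*", "cloudformation:*"],
          Resource: "*",
        },
      ],
    },
  });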

Your qualification process will include performing a risk assessment and then creating a configuration to meet your specific security and data integrity requirements.

How might this work in practice?

Let’s say you wanted to use Amazon S3 to store GxP data coming from your lab equipment. Amazon S3 is covered by numerous AWS compliance programs, so the base service is qualified and maintained under a state of control. Because AWS has qualified the base service, there’s no need to re-test the standard functionality of Amazon S3, like the ability to create a bucket and store objects in it. You may initially perform a risk assessment, which results in a set of controls, such as requiring that the bucket always be encrypted. You would then configure your Amazon S3 bucket to satisfy those controls and data integrity requirements. Finally, you test that your configuration of Amazon S3 satisfies your requirements, and capture test evidence to prove it.
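
For example, a qualification test for the bucket configuration sketched earlier could be written with Jest and the aws-cdk-lib assertions module, and the test report kept as evidence. The stack import path is a hypothetical placeholder, and the assertions below only cover two of the controls.

  import { App } from "aws-cdk-lib";
  import { Template } from "aws-cdk-lib/assertions";
  import { GxpBucketStack } from "../lib/gxp-bucket-stack"; // hypothetical path

  // Jest test: assert that the synthesized bucket satisfies the encryption and
  // versioning controls identified during the risk assessment.
  test("GxP data bucket is encrypted with KMS and versioned", () => {
    const app = new App();
    const stack = new GxpBucketStack(app, "TestStack");
    const template = Template.fromStack(stack);

    template.hasResourceProperties("AWS::S3::Bucket", {
      BucketEncryption: {
        ServerSideEncryptionConfiguration: [
          { ServerSideEncryptionByDefault: { SSEAlgorithm: "aws:kms" } },
        ],
      },
      VersioningConfiguration: { Status: "Enabled" },
    });
  });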

There are a couple of possible approaches. Each of your development teams could take the same data integrity requirements, configure their own Amazon S3 buckets, and run tests to demonstrate the requirements have been met; basically, include this testing in each application's qualification and validation efforts. However, if there are many GxP applications using Amazon S3, there is a risk of inconsistent implementation and duplication of effort.

The alternative is to encapsulate a standard configuration in an approved ‘building block’, qualify it once, and then make it available to your development teams so they can create compliant Amazon S3 buckets. This facilitates consistency, demonstrates control and reduces effort. This could be implemented as an AWS CloudFormation template or an AWS Cloud Development Kit (AWS CDK) construct. To learn more, read about this ‘building block’ approach in the GxP Systems on AWS whitepaper.
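
From a development team's point of view, consuming such a building block might look like the sketch below. The internal package name and the construct's properties are hypothetical; the construct itself would encapsulate the configuration shown earlier.

  import { Stack, StackProps } from "aws-cdk-lib";
  import { Construct } from "constructs";
  // Hypothetical internal library that publishes the qualified building block
  import { GxpBucket } from "@example-pharma/gxp-constructs";

  // A development team creates a compliant bucket from the qualified building
  // block instead of configuring Amazon S3 from scratch in every application.
  export class LabResultsStack extends Stack {
    constructor(scope: Construct, id: string, props?: StackProps) {
      super(scope, id, props);

      new GxpBucket(this, "LabResults", {
        bucketNamePrefix: "lab-results", // hypothetical construct property
      });
    }
  }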

Any deployed resources need to be maintained in a qualified state. This requires that services remain configured according to the requirements you defined during qualification. This can be done through the use of AWS Config rules, which monitor your service configurations and highlight any non-conformant resources.

For example, there is an AWS Config managed rule which checks that versioning is enabled on your bucket. There are other rules which check the configuration to confirm that logging is turned on, encryption is enabled, and more. These rules help maintain the qualified state of your resources. There is even a 21 CFR Part 11 conformance pack, which is a collection of rules specifically aimed at assisting with that regulation.
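
A sketch of those checks in AWS CDK (TypeScript) is shown below, assuming an AWS Config configuration recorder is already enabled in the account; the rule names are arbitrary, and the managed rule identifiers come from the aws-cdk-lib aws-config module.

  import { Stack, StackProps } from "aws-cdk-lib";
  import { Construct } from "constructs";
  import * as config from "aws-cdk-lib/aws-config";

  // Continuously monitor that deployed buckets remain in their qualified state.
  // Assumes an AWS Config configuration recorder is already enabled.
  export class GxpConfigRulesStack extends Stack {
    constructor(scope: Construct, id: string, props?: StackProps) {
      super(scope, id, props);

      new config.ManagedRule(this, "BucketVersioningEnabled", {
        identifier: config.ManagedRuleIdentifiers.S3_BUCKET_VERSIONING_ENABLED,
      });

      new config.ManagedRule(this, "BucketLoggingEnabled", {
        identifier: config.ManagedRuleIdentifiers.S3_BUCKET_LOGGING_ENABLED,
      });

      new config.ManagedRule(this, "BucketEncryptionEnabled", {
        identifier: config.ManagedRuleIdentifiers.S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED,
      });
    }
  }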

Conclusion

You shouldn’t need to qualify the standard, documented functionality of AWS services, but you do need to qualify your service configuration and controls. That configuration should consider both security and data integrity requirements.

The approach to defining configuration and controls to satisfy regulatory requirements described in this blog can be applied to any AWS service. To learn more about what AWS can do for you, contact your AWS representative.

Ian Sutcliffe

Ian Sutcliffe is a Global Solution Architect with 25+ years of experience in IT, primarily in the Life Sciences industry. A thought leader in regulated cloud computing, he focuses on IT operating model and process optimization and automation, with the intent of helping customers become Regulated Cloud Natives.

Senthil Gurumoorthi

Senthil Gurumoorthi is the Global Security Assurance Lead, HCLS – Security & Compliance. He has over 19 years of diverse experience in global biopharmaceutical and healthcare business technologies, with leadership expertise in technology delivery, risk, security, health authority inspection, audit, and quality management. He is an experienced speaker, panelist, and moderator on HCLS security, quality, and compliance topics, and is passionate about modernizing quality and compliance in the HCLS industry. Senthil is also a member of the FDA-Industry CSA Team and is a contributing author of the ISPE GAMP GPG Data Integrity by Design. He holds a B.E. in Electronics & Communication from PSG College of Technology, an MS in Electrical Engineering from New Jersey Institute of Technology, and a Master of Business Administration (MBA) from Imperial College London.