File storage access patterns insights using Amazon FSx for Windows File Server
When using Windows file services, enterprise customers often need to identify potential security breaches and unauthorized access attempts. Traditionally, customers rely on Windows file server file access auditing to capture and analyze audit records, gain these insights, and strengthen their security posture.
Amazon FSx for Windows File Server provides fully managed shared storage built on Windows Server, and delivers a rich set of administrative and security features. File access auditing, a feature of Amazon FSx for Windows File Server, allows customers to capture audit event logs and publish them to Amazon CloudWatch Logs or to Amazon Kinesis Data Firehose.
Customers are using file access auditing to meet security and compliance requirements, while eliminating the need to manage storage as logs grow over time. The logs generated by file access auditing can also support detailed forensic analysis.
In this blog, we demonstrate two AWS solutions for performing forensic analysis of file audit records from Amazon FSx for Windows File Server file systems. The first analyzes audit records using Amazon CloudWatch Logs Insights. The second uses a custom AWS Lambda function and an AWS CloudFormation template to stream audit records to Amazon DynamoDB in real time, where they can be analyzed using PartiQL, a SQL-compatible query language for Amazon DynamoDB.
Prerequisites
Our solution requires an FSx for Windows File Server file system with file access auditing enabled. You may follow the steps outlined in the blog post “File Access Auditing Is Now Available for Amazon FSx for Windows File Server” to set up your file system, or refer to the file access auditing documentation.
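If you prefer to enable file access auditing programmatically rather than through the console, the following minimal boto3 sketch shows one way to do it, assuming the destination log group already exists. The file system ID, Region, and account ID in the log group ARN are placeholders.

import boto3

fsx = boto3.client("fsx")

# Enable auditing of successful and failed attempts for both file/folder and
# file share accesses, publishing the audit event log to CloudWatch Logs.
# The file system ID, Region, and account ID below are placeholders.
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    WindowsConfiguration={
        "AuditLogConfiguration": {
            "FileAccessAuditLogLevel": "SUCCESS_AND_FAILURE",
            "FileShareAccessAuditLogLevel": "SUCCESS_AND_FAILURE",
            "AuditLogDestination": (
                "arn:aws:logs:us-east-1:111122223333:log-group:/aws/fsx/accessaudit"
            ),
        }
    },
)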
Setting up the environment
Our file system is joined to an AWS Managed Microsoft AD domain, example.local. We also created a Windows instance, EXAMPLE1, seamlessly joined to the same domain. After logging in to this instance with the Managed AD delegated administrator account Admin, we connected to the default FSx for Windows File Server share at \\<FSX-for-Windows-DNS-name>\share and created a new folder, Accounting:
Figure 1. Creating default folder Accounting
Only the Admin account has FULL access to this folder; all other users have only READ, WRITE, and MODIFY permissions to files in this folder. We also created a regular user account, John Doe (johnd@example.local), which will be used to generate Success and Failure audit records. We then use this account to log in to our EXAMPLE1 Windows instance and generate various audit records. These audit records will later be used to demonstrate forensic analysis using CloudWatch Logs Insights and DynamoDB.
Generate SUCCESS audit records
First, we access the Accounting folder and open the text file Report 6-12-2021.txt.
We modify the file by adding the line "Modified 6-14-2021 7:14pm" and save it, as demonstrated in Figure 2.
Figure 2. Modifying report file
Next, we delete the file Report 6-12-2021.txt, as shown in Figure 3.
Figure 3. Deleting report file
Generate FAILURE audit records
We then open the contents of \\<FSxFileSystemDNSName>\share and attempt to modify the Accounting folder's permissions. The johnd@example.local account does not have the necessary permissions to modify the folder's access permissions. Therefore, as you see in Figure 4, access is denied.
Figure 4. Access denied on attempt to modify Accounting folder permissions
Now that we have generated a number of audit events, we will demonstrate the steps for analyzing them with CloudWatch Logs Insights.
Analyze user activity with Amazon CloudWatch Logs Insights
First, we open Amazon CloudWatch Logs Insights from the Amazon CloudWatch console by selecting Logs Insights from the panel on the left:
Figure 5. Opening CloudWatch Logs Insights
When enabling file access auditing for FSx for Windows File Server, we can specify a log group for audit records or leave it at the default value of /aws/fsx/windows. To demonstrate a custom log group, we use /aws/fsx/accessaudit. Let's select this log group.
Figure 6. Selecting file access auditing log group under Logs Insights
Let’s run the following default query, which returns the content of the standard CloudWatch log entry:
fields @timestamp, @message
| sort @timestamp desc
| limit 20
The result returned by the default query is shown in Figure 7:
Figure 7. Retrieving raw data from file access audit log
The log records returned by the query present the @message body as an XML string. In the next section, we demonstrate how to parse this data and extract the required information using the Amazon CloudWatch Logs Insights query language.
Analyzing File Access Auditing Data using Amazon CloudWatch Logs Insights Query Language
Amazon CloudWatch Logs Insights supports regular expressions for parsing the content of the @message field, as described in the AWS documentation: CloudWatch Logs Insights query syntax – Amazon CloudWatch Logs. For more information, you can reference the Regular Expression Language Quick Reference.
Using regular expression language, we can parse the content of the @message string to extract some fields that will prove helpful in analyzing user activity. Here is a sample query using regular expressions:
fields @timestamp, @message
| parse @message /<EventID>(?<EventID>(.|\n)*?)<\/EventID>/
| parse @message /<Data Name='ObjectName'>(?<ObjectName>(.|\n)*?)<\/Data>/
| parse @message /<Keywords>(?<Keywords>(.|\n)*?)<\/Keywords>/
| parse @message /<Data Name='SubjectUserName'>(?<SubjectUserName>(.|\n)*?)<\/Data>/
| parse @message /<Data Name='AccessList'>(?<AccessList>(.|\n)*?)<\/Data>/
| parse @message /<Data Name='AccessReason'>(?<AccessReason>(.|\n)*?)<\/Data>/
The resulting output, demonstrated in Figure 8, shows the original @message field parsed into multiple fields such as EventID, ObjectName, Keywords, AccessList, and AccessReason.
Figure 8. Parsed and formatted output
Next, let’s find out who modified the file Report 6-12-2021.txt by running the following query:
fields @timestamp, @message
| parse @message /<EventID>(?<EventID>(.|\n)*?)<\/EventID>/
| parse @message /<Data Name='ObjectName'>(?<ObjectName>(.|\n)*?)<\/Data>/
| parse @message /<Keywords>(?<Keywords>(.|\n)*?)<\/Keywords>/
| parse @message /<Data Name='SubjectUserName'>(?<SubjectUserName>(.|\n)*?)<\/Data>/
| parse @message /<Data Name='AccessList'>(?<AccessList>(.|\n)*?)<\/Data>/
| parse @message /<Data Name='AccessReason'>(?<AccessReason>(.|\n)*?)<\/Data>/
| filter EventID = '4663' and ObjectName like 'Report 6-12-2021.txt' and AccessList like '%%4417'
This query returns the audit records with EventID 4663 and an AccessList value of %%4417, which reflect a successful attempt to modify the file. For more information about Event ID 4663, see 4663(S): An attempt was made to access an object.
The returned data, presented in Figure 9, shows that user johnd modified the file Report 6-12-2021.txt, along with the timestamp of the event. This information allows us to discover who modified the file and when.
Figure 9. Identifying the user that modified Report 6-12-2021.txt file
Since we enabled auditing of all file system events for Everyone, let’s see how to find out who deleted this file. To obtain this information, we execute the following query:
fields @timestamp, @message
| parse @message /<EventID>(?<EventID>(.|\n)*?)<\/EventID>/
| parse @message /<Data Name='ObjectName'>(?<ObjectName>(.|\n)*?)<\/Data>/
| parse @message /<Keywords>(?<Keywords>(.|\n)*?)<\/Keywords>/
| parse @message /<Data Name='SubjectUserName'>(?<SubjectUserName>(.|\n)*?)<\/Data>/
| parse @message /<Data Name='AccessList'>(?<AccessList>(.|\n)*?)<\/Data>/
| parse @message /<Data Name='AccessReason'>(?<AccessReason>(.|\n)*?)<\/Data>/
| filter EventID = '4659' and ObjectName like 'Report 6-12-2021.txt' and AccessList like '%%1537'
The query returns information about Event ID 4659 (A handle to an object was requested with intent to delete) with an AccessList value of %%1537, which indicates that the right to delete the object was exercised. The query result is presented in Figure 10.
Figure 10. Identifying the user that deleted Report 6-12-2021.txt file
As you can see, we discovered that user johnd deleted the file Report 6-12-2021.txt, along with the exact timestamp of this event.
Until now, we have only investigated Success auditing events. However, our file system is also enabled to record Failure auditing events. Let’s find out if we can discover auditing records related to a failed attempt to modify permissions on the folder as shown in Figure 4.
According to 4656(S, F): A handle to an object was requested, we are interested in Event ID 4656. However, since we want to discover who unsuccessfully attempted to modify the folder's permissions, we are specifically interested in Access Denied events. These events are represented by the Keywords field with the value 0x8010000000000000. To accomplish this task, we run the following query:
fields @timestamp, @message
| parse @message /<EventID>(?<EventID>(.|\n)*?)<\/EventID>/
| parse @message /<Data Name='ObjectName'>(?<ObjectName>(.|\n)*?)<\/Data>/
| parse @message /<Keywords>(?<Keywords>(.|\n)*?)<\/Keywords>/
| parse @message /<Data Name='SubjectUserName'>(?<SubjectUserName>(.|\n)*?)<\/Data>/
| parse @message /<Data Name='AccessList'>(?<AccessList>(.|\n)*?)<\/Data>/
| parse @message /<Data Name='AccessReason'>(?<AccessReason>(.|\n)*?)<\/Data>/
| filter EventID = '4656' and ObjectName like 'Accounting' and Keywords like '0x801'
The output of the query, presented in Figure 11, clearly identifies the user who attempted to modify folder permissions unsuccessfully.
Figure 11. Identifying failed attempt to modify permissions on Accounting folder
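The Logs Insights console is convenient for interactive investigation, but the same queries can also be run programmatically, for example from a scheduled job. The following is a minimal boto3 sketch; the query string is abbreviated to a single parse and filter for brevity, the log group name matches the one used in our environment, and the 24-hour time window is an arbitrary choice.

import time
import boto3

logs = boto3.client("logs")

# Abbreviated query; in practice you would reuse the full parse/filter expressions above.
query = r"""fields @timestamp, @message
| parse @message /<EventID>(?<EventID>(.|\n)*?)<\/EventID>/
| filter EventID = '4663'
| sort @timestamp desc
| limit 20"""

# Start a Logs Insights query against the file access auditing log group (last 24 hours).
start = logs.start_query(
    logGroupName="/aws/fsx/accessaudit",
    startTime=int(time.time()) - 24 * 3600,
    endTime=int(time.time()),
    queryString=query,
)

# Poll until the query completes, then print the returned rows.
while True:
    response = logs.get_query_results(queryId=start["queryId"])
    if response["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in response["results"]:
    print({field["field"]: field["value"] for field in row})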
Analyze user activity with Amazon DynamoDB and PartiQL Query Language
Streaming audit records to DynamoDB
When you enable file access auditing for your FSx for Windows File Server file system, you can select a CloudWatch log group or accept the default, /aws/fsx/windows. A log stream named audit_<file-system-id> is created in the selected log group to hold the audit records. This log stream naming convention allows multiple file systems to use the same log group. In our environment, the log group name is /aws/fsx/accessaudit.
CloudWatch log groups support subscriptions. Subscriptions provide a real-time feed of log records from CloudWatch Logs to other services, such as Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, or AWS Lambda, for custom processing, analysis, or loading into other systems. We stream CloudWatch log records to an AWS Lambda function, which parses the records and stores them in a DynamoDB table. First, we have to design and create our DynamoDB table.
DynamoDB table structure
Every DynamoDB table must have a primary key. The primary key uniquely identifies each item in the table, so no two items can have the same key. A primary key can be simple or composite; a composite primary key consists of two attributes, a partition key and a sort key. As mentioned in the previous section, a CloudWatch log group may contain log streams from multiple file systems, so it is logical to select the file system identifier as our partition key.
Each audit record contains a system timestamp with a resolution of one ten-millionth of a second, so we selected this system timestamp as the sort key. Given the very high resolution of this timestamp, the combination of file system identifier and system timestamp is unique and satisfies the requirements for a DynamoDB table primary key.
DynamoDB supports expiring items by using Time to Live (TTL). TTL expiration is based on a timestamp in Unix epoch time format with up to one-second resolution. Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. Considering that forensic investigations are usually limited in time, we decided to enable this feature. Based on the retention period specified by the user when running our CloudFormation template, our Lambda function generates a TTL timestamp. The retention period can range from 1 day to 10 years.
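As an illustration of the arithmetic, the TTL attribute could be derived from the template's retention parameters roughly as follows. This is a sketch only; the parameter names are hypothetical and not necessarily those used by the deployed function.

import time

def ttl_attribute(retention_period, retention_format):
    # Convert a retention period expressed in days or years into a Unix-epoch
    # expiration timestamp relative to ingestion time (illustrative only).
    days = retention_period * 365 if retention_format == "Years" else retention_period
    return int(time.time()) + days * 24 * 60 * 60

# Example: a 10-year retention expires roughly 3,650 days after ingestion.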
Thus, the core structure of our DynamoDB table consists of the following fields:
| Field Name | Format | Description |
| --- | --- | --- |
| FileSystemID | string | Identifier of the file system that is the source of the audit record |
| SystemTime | string | System timestamp as unqualified time in ISO 8601 format, with a resolution of one ten-millionth of a second |
| TTL | number | Record expiration time in Unix epoch time format, with up to one-second resolution |
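In our solution, the table is created by the CloudFormation template. For illustration only, a table with the structure above could be created with boto3 as follows; the table name and on-demand billing mode are choices made for the sketch.

import boto3

dynamodb = boto3.client("dynamodb")

# Create the audit table with the composite primary key described above.
dynamodb.create_table(
    TableName="FSxAudit--aws-fsx-accessaudit",  # placeholder; the template derives this name
    AttributeDefinitions=[
        {"AttributeName": "FileSystemID", "AttributeType": "S"},
        {"AttributeName": "SystemTime", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "FileSystemID", "KeyType": "HASH"},   # partition key
        {"AttributeName": "SystemTime", "KeyType": "RANGE"},    # sort key
    ],
    BillingMode="PAY_PER_REQUEST",
)

# Enable item expiration on the TTL attribute once the table is active.
dynamodb.get_waiter("table_exists").wait(TableName="FSxAudit--aws-fsx-accessaudit")
dynamodb.update_time_to_live(
    TableName="FSxAudit--aws-fsx-accessaudit",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "TTL"},
)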
Audit record format in DynamoDB
File access audit records are stored in CloudWatch Logs in their native format, an XML string. Figure 12 provides an example of an audit record in native format. See the Microsoft documentation for details on the file system event and file share event formats.
Because DynamoDB is a database with advanced search capabilities, storing each record as a single string would not make sense: it would force every search operation to scan string data. Also, if a user decides to export search results in CSV format, it is much more usable to have the data in columnar form.
Our Lambda function parses the file audit XML records and stores them in our DynamoDB table in columnar format. Moreover, it replaces some hexadecimal data with human-readable descriptions and some coded fields with human-readable actions. Fields deemed unimportant for forensic investigation, such as HandleID and ProcessID, are not stored. The following table lists the fields that our Lambda function writes to the DynamoDB table for each file access audit record; note that not all fields are present in every record. A minimal sketch of this parsing logic follows the table.
Figure 12. Windows native file access audit record as XML string
| Field name | Format | Description |
| --- | --- | --- |
| AccessList | string | List of requested or granted access rights in numeric format, e.g. "1537, 4423" |
| AccessMask | string | Same as AccessList, but as a hexadecimal flags field, e.g. 0x10080 |
| Access | string | List of requested or granted access rights in human-readable format derived from the AccessList field, e.g. "Delete, Read Attributes" |
| Audit | string | Indicator of Success or Failure of the respective operation |
| EventID | number | Microsoft-defined Windows event log event ID |
| EventRecordID | number | Record ID in the file server audit event log |
| IPAddress | string | IP address of the client |
| ObjectName | string | The target file, folder, or file share that was accessed |
| ObjectType | string | Type of the object that was accessed |
| Server | string | Name of the file server that processed the request |
| ShareLocalPath | string | Local path to the accessed object on the file server |
| ShareName | string | Share name as seen by the client |
| SubjectDomain | string | Name of the domain of the user attempting access |
| SubjectUserName | string | Name of the user attempting access |
| OldSd | string | Old security descriptor of the object |
| NewSd | string | New security descriptor of the object |
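The Lambda function itself is provisioned by the CloudFormation template described in the next section, and its source is not reproduced here. The following is a minimal Python sketch of the core logic under the assumptions above: it decodes the subscription payload, pulls a subset of the fields listed in the table out of the XML record, and writes an item keyed by FileSystemID and SystemTime. The real function covers more fields, renames some attributes (for example, SubjectDomainName is stored as SubjectDomain), translates access codes such as %%1537 into readable text, and derives the TTL from the template parameters; the table name and 10-year TTL below are placeholders.

import base64
import gzip
import json
import os
import time
import xml.etree.ElementTree as ET

import boto3

TABLE_NAME = os.environ.get("TABLE_NAME", "FSxAudit--aws-fsx-accessaudit")  # placeholder
TTL_SECONDS = 10 * 365 * 24 * 60 * 60  # placeholder: roughly 10 years

table = boto3.resource("dynamodb").Table(TABLE_NAME)

FIELDS_OF_INTEREST = (
    "ObjectName", "ObjectType", "SubjectUserName", "SubjectDomainName",
    "AccessList", "AccessMask", "IpAddress", "ShareName", "ShareLocalPath",
)


def lambda_handler(event, context):
    # CloudWatch Logs delivers subscription data base64-encoded and gzip-compressed.
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))

    # The log stream is named audit_<file-system-id>, so the partition key can be
    # recovered from the stream name.
    file_system_id = payload["logStream"].replace("audit_", "", 1)

    for log_event in payload["logEvents"]:
        item = parse_audit_record(log_event["message"])
        item["FileSystemID"] = file_system_id
        item["TTL"] = int(time.time()) + TTL_SECONDS
        table.put_item(Item=item)


def parse_audit_record(xml_string):
    """Extract a subset of fields from a Windows audit event serialized as XML."""
    item = {}
    for element in ET.fromstring(xml_string).iter():
        tag = element.tag.split("}", 1)[-1]  # drop any XML namespace prefix
        if tag == "EventID" and element.text:
            item["EventID"] = int(element.text)
        elif tag == "TimeCreated":
            item["SystemTime"] = element.get("SystemTime")  # sort key
        elif tag == "Keywords" and element.text:
            # Per the Keywords values discussed earlier: 0x8010... marks Audit Failure.
            item["Audit"] = "Failure" if element.text.startswith("0x801") else "Success"
        elif tag == "Data" and element.get("Name") in FIELDS_OF_INTEREST:
            item[element.get("Name")] = element.text or ""
    return item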
Deploying the File Access Auditing Solution based on DynamoDB and CloudWatch Subscription Filter
To facilitate streaming of file audit records to DynamoDB, we developed a Lambda function that parses file audit records and uploads them to DynamoDB. To simplify deployment of this solution, we also developed a CloudFormation template, which deploys the Lambda function, creates a DynamoDB table to host the log records from the selected CloudWatch log group, and subscribes the Lambda function to that log group.
The CloudFormation template takes three parameters:
- CloudWatch Log Group – The log group specified when configuring File Access Auditing on the FSx file system.
This parameter needs to be specified only once per AWS account per AWS region for all FSx file systems using the given log group. If different FSx file systems are configured to send auditing data to multiple log groups, you will need to deploy this template once for every CloudWatch Log Group. For more information about FSx audit event log destinations, refer to the documentation.
- Log Retention Period – The period of time to retain file access auditing records in DynamoDB. The default retention period is 10 years.
- Log Retention Period Format – Defines Log Retention Period value as Years or Days.
Figure 13 demonstrates entering CloudFormation template parameters.
Figure 13. Entering CloudFormation template parameters.
When you deploy this CloudFormation template, the following resources will be created:
- DynamoDB table with the name FSxAudit-<CloudWatch Log group name>. Note that any special characters that are not valid in a DynamoDB table name are replaced with dashes ("-"). In our case, the log group name is /aws/fsx/accessaudit, so a DynamoDB table named FSxAudit--aws-fsx-accessaudit is created, as shown in Figure 14 (a short naming sketch follows Figure 15):
Figure 14. DynamoDB table created through CloudFormation template
- Subscription Filter for the CloudWatch Log Group which is the target for the Amazon FSx file access auditing, pointing to our Lambda function. This subscription filter is presented in Figure 15.
Figure 15. Subscription filter created by the CloudFormation template
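For reference, the table-name derivation mentioned above could be expressed as the following small Python sketch. The exact logic inside the template is not shown in this blog, so treat this purely as an illustration of the naming rule.

import re

def table_name_for_log_group(log_group_name):
    # Keep characters that are valid in a DynamoDB table name; replace the rest with dashes.
    return "FSxAudit-" + re.sub(r"[^A-Za-z0-9_.\-]", "-", log_group_name)

print(table_name_for_log_group("/aws/fsx/accessaudit"))  # FSxAudit--aws-fsx-accessaudit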
Analyzing File Access Auditing Data using Amazon DynamoDB and PartiQL Query Language
Earlier in this post, we generated some auditable events and provided a number of examples of how to investigate certain activities related to those events. In this section, we demonstrate how to audit the same activities using the solution based on the CloudWatch subscription filter and DynamoDB.
The DynamoDB table contains file access auditing records reflecting user activities related to Amazon FSx files and folders:
Figure 16. Sample File Access Audit records in DynamoDB table
To find out who modified the Report 6-12-2021.txt file and when, we execute the following PartiQL query:
select SubjectUserName, SystemTime from "FSxAudit--aws-fsx-accessaudit" where contains ("ObjectName",'Report 6-12-2021') and contains ("Access",'Write Data')
The query, presented in Figure 17, returns johnd as the username and the SystemTime at which the Write Data access was performed:
Figure 17. Sample query and query result identifying user who modified Report 6-12-2021.txt file
Now, let’s find out who deleted this file and when. To obtain this information, we will run the following query:
select SubjectUserName, SystemTime from "FSxAudit--aws-fsx-accessaudit" where contains ("ObjectName",'Report 6-12-2021') and contains ("Access",'Delete')
Figure 18. Sample query and query result identifying user who deleted Report 6-12-2021.txt file
The query returns the username and the timestamp of the Delete access to the file.
Let’s query the auditing information related to a failed attempt to modify permissions on the folder Accounting. The following query returns the user attempting to modify permissions on the Accounting folder, as well as the outcome of this attempt:
select SubjectUserName, SystemTime, Audit from "FSxAudit--aws-fsx-accessaudit" where contains ("ObjectName",'Accounting') and contains ("Access",'Write ACL')
Figure 19. Identifying failed attempt to modify permissions on the folder Accounting
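The PartiQL queries above were run from the DynamoDB console, but they can also be issued programmatically, which is convenient for recurring investigations. A minimal boto3 sketch, using the table name from our environment, might look like this:

import boto3

dynamodb = boto3.client("dynamodb")

# Identify attempts to change permissions on the Accounting folder and their outcome.
statement = (
    "SELECT SubjectUserName, SystemTime, Audit "
    'FROM "FSxAudit--aws-fsx-accessaudit" '
    "WHERE contains(\"ObjectName\", 'Accounting') AND contains(\"Access\", 'Write ACL')"
)

response = dynamodb.execute_statement(Statement=statement)
for item in response["Items"]:
    # The low-level client returns DynamoDB-typed attributes, e.g. {"S": "johnd"}.
    print(item["SubjectUserName"]["S"], item["SystemTime"]["S"], item["Audit"]["S"])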
Cleaning up
To remove the resources created to support the solution described in this blog, delete the CloudFormation stack that was deployed in the "Deploying the File Access Auditing Solution based on DynamoDB and CloudWatch Subscription Filter" section of this blog.
Conclusion
Amazon FSx for Windows File Server supports auditing end-user access to files, folders, and file shares, enabling you to query, process, store, and archive logs, and to trigger actions that further advance your security and compliance goals. In this blog, we showed how file access auditing can also give you greater visibility into your file system access patterns and enable you to further strengthen your security posture.
We first parsed Amazon CloudWatch Logs using regular expressions and analyzed the audit events with CloudWatch Logs Insights. Then, we leveraged a CloudWatch subscription filter to stream the records to DynamoDB and ran the analysis using PartiQL.
The queries we demonstrated in this blog are just examples, and further detailed processing may be accomplished by parsing additional fields and querying their values. Since Amazon CloudWatch Log Group retention periods and DynamoDB Time to Live values can be customized, this approach presents a robust solution to support long-term forensic investigation of file access audit events.
Thanks for reading this blog. If you have any comments or questions, leave a comment in the comments section.