AWS Startups Blog
Infinite Scaling of Selenium UI tests using AWS Lambda
Guest post by Kundan Kumar, Staff SDET, HackerEarth
HackerEarth is a comprehensive developer assessment software that helps companies accurately measure the skills of developers during the recruiting process. The proprietary tech assessment platform vets technical talent through skill-based evaluation and analytics. Over the years, we have also built a thriving community of 4M+ developers who come to HackerEarth to participate in hackathons and coding challenges, sharpen their skills, and compete with one another.
HackerEarth has a healthy release cycle that warrants frequent deployments. We have built a mature CI/CD pipeline with all the essential safety nets, automated functional tests (Selenium UI tests) being one of them. Sometimes we deploy multiple times a day, and to maintain a high-quality product, we test every commit thoroughly with a large number of these tests. Our functional test count ran into the hundreds very quickly and kept growing over time, eventually taking close to 3 hours to run all the UI test cases. You can imagine how hard it is to wait that long for feedback, so we decided it was time to optimize at every possible level: individual tests, the framework, and so on. We optimized the tests and improved overall performance, but not by much.
Now we needed the power of parallel execution. Keeping the Jenkins nodes as small as possible, we first tried splitting the test cases into two chunks based on their execution time and running each chunk on a different node. This cut our run time in half, to roughly 1.5 hours, but that still wasn't enough to achieve continuous delivery. We did not want to add more nodes because of the expense, so we started looking for alternative solutions.
The solution
We planned to introduce some AWS services to get to the state where we wanted to be. AWS has a serverless service called AWS Lambda, which runs and scales your code with very high availability. We felt that with some tweaks here and there, we could use it to run all the UI tests in parallel, regardless of the number of test cases. We also chose a few more AWS services: Amazon S3 to store all the failure screenshots and Amazon DynamoDB to maintain our custom report data.
At this point we had decided on all the AWS services we were going to use, which left us with the real job: making all the UI test components run inside the Lambda environment. Our automation framework's tech stack included Python as the language, Selenium as the tool, Pytest as the framework, and a few more Python libraries. We decided to use the Serverless Framework to deploy and invoke the Lambda function.
The Existing Architecture
Before: (architecture diagram: UI tests split into chunks across two Jenkins nodes)
After: (architecture diagram: Jenkins fanning tests out to parallel AWS Lambda invocations, with Amazon S3 and DynamoDB for reporting)
Setting up the stage
AWS Lambda provides the language runtime, so Python was already available to us. There was a small problem, though: stock Chrome and ChromeDriver are not built to run inside the Lambda environment, as they are too large for Lambda. So we decided to source a Lambda-compatible headless Chromium binary and a matching ChromeDriver. We added these as a Lambda layer by downloading both binaries into a folder called chromedriver and declaring the layer in the serverless.yml file.
To give you a small introduction to AWS Lambda layers: you can configure your Lambda function to pull in additional code and content in the form of layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. With layers, you can use libraries in your function without needing to include them in your deployment package. We decided on two Serverless services: one that provides headless Chromium along with ChromeDriver as a layer, and another that contains the Lambda handler, the test-runner function, and the requirements.txt file. Our serverless.yml file for the layer looked like this:
# serverless.yml
service: chrome-layer

provider:
  name: aws
  runtime: python3.6
  region: ap-southeast-1
  timeout: 900

layers:
  chromedriver:
    path: chromedriver
    description: chrome driver layer
    compatibleRuntimes:
      - python3.6

resources:
  Outputs:
    ChromedriverLayerExport:
      Value:
        Ref: ChromedriverLambdaLayer
      Export:
        Name: ChromedriverLambdaLayer
So far, all the essential components of our automation framework were in place; the only things left were Selenium and the other supporting Python libraries. There is a handy Serverless plugin called serverless-python-requirements that takes care of all the Python libraries and makes them available to the Lambda function at run time. You create your requirements.txt file and add the plugin to the serverless.yml file. First, create a package.json file to save your node dependencies. Accept the defaults, then install the plugin:
$ npm init
This utility will walk you through creating a package.json file.
...Truncated...
Is this ok? (yes) yes
$ npm install --save serverless-python-requirements
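For reference, a minimal requirements.txt for this stack might look like the following (a sketch; pin whatever versions your suite actually uses):
# requirements.txt
selenium
pytest
boto3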
To configure the function's serverless.yml to use the plugin, we add the following lines:
# serverless.yml
plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux
The plugins section registers the plugin with the Framework. In the custom section, we tell the plugin to use Docker when installing packages with pip. It will use a Docker container that’s similar to the Lambda environment so the compiled extensions will be compatible. You will need Docker installed for this to work.
The plugin works by hooking into the Framework on a deploy command. Before your package is zipped, it uses Docker to install the packages listed in your requirements.txt file and save them to a .requirements/ directory. It then symlinks the contents of .requirements/ into your top-level directory so that Python imports work as expected. After the deploy is finished, it cleans up the symlinks to keep your directory clean.
With that, our final serverless.yml for the Lambda function looked like this:
# serverless.yml
service: selenium-lambda

provider:
  name: aws
  runtime: python3.6
  region: ap-southeast-1
  timeout: 900
  memorySize: 2000

functions:
  lambda_handler:
    handler: handler.lambda_handler
    layers:
      - ${cf:chrome-layer-dev.ChromedriverLayerExport}

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux
The next piece is the Lambda handler function, which AWS Lambda executes when we issue the invoke command. In our case, the handler moves to the tests' home directory and then runs the Pytest command for the UI test case it was given.
# handler.py
import os
import pytest

def lambda_handler(event, context):
    # Move to the suite's home directory so Pytest can find the tests.
    os.chdir('/<test home dir>/')
    # Run the single test identified by the invocation payload.
    pytest.main(<pytest cli command to run the test>)
    response = {
        "statusCode": 200
    }
    return response
We also need to tell Selenium where to find ChromeDriver, and setting the desired Chrome options is required too. Lambda extracts layer contents into the /opt directory, so that is where our ChromeDriver ends up. Enable the headless options, provide the ChromeDriver binary location, and you are good to go!
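As a rough sketch of what that driver setup can look like (the exact paths under /opt depend on how the layer is packaged; the ones below are assumptions):
# chrome_setup.py (illustrative sketch; the /opt paths are assumptions)
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def build_driver():
    options = Options()
    # Path where our layer unpacks the headless Chromium binary (assumed).
    options.binary_location = '/opt/headless-chromium'
    options.add_argument('--headless')
    options.add_argument('--no-sandbox')             # flags commonly needed in Lambda
    options.add_argument('--single-process')
    options.add_argument('--disable-dev-shm-usage')  # /dev/shm is very small on Lambda
    options.add_argument('--window-size=1280,1024')
    # Path where our layer unpacks ChromeDriver (assumed).
    return webdriver.Chrome('/opt/chromedriver', options=options)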
It's time to deploy now, but we cannot deploy using Serverless until we install it, right? So let's install the Serverless Framework on the machine we will deploy our Lambda layers from.
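If it isn't installed already, the framework comes straight from npm:
$ npm install -g serverless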
Deploy and Invoke Lambda
It's time to deploy both services. We go into each directory and deploy with the commands below:
$ cd chromedriver
$ serverless deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3…
...Truncated...
Serverless: Run the “serverless” command to setup monitoring, troubleshooting and testing.
$ cd lambda
$ serverless deploy
Serverless: Generated requirements from /home/serverless-layer/lambda/requirements.txt in /home/serverless-layer/lambda/.serverless/requirements.txt...
Serverless: Using static cache of requirements found at /home/.cache/serverless-python-requirements/28d4063752ead6856480af25ba343ca2ac39ee470d80839e2d9fb53e887f7ed1_slspyc …
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Injecting required Python packages to package...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service selenium-lambda.zip file to S3 (18.1 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
.........
...Truncated...
Great, everything is now deployed to AWS Lambda. Next we need to invoke the main Lambda function, which is responsible for running the actual UI tests. We used Boto3, the Amazon Web Services (AWS) SDK for Python, to invoke the function. We also wrote a test-runner file that generates a list of all the test case identifiers and passes them to the Lambda function. The function is the same for every invocation; each asynchronous invoke simply carries a unique test identifier, so all the tests execute in parallel.
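A stripped-down sketch of what such a runner can look like (the deployed function name and the event payload shape are assumptions):
# run_tests.py (illustrative sketch; function name and payload shape are assumptions)
import json
import subprocess
import boto3

lambda_client = boto3.client('lambda', region_name='ap-southeast-1')

def collect_test_ids():
    # Ask Pytest for the test node IDs without running anything.
    out = subprocess.run(['pytest', '--collect-only', '-q'],
                         stdout=subprocess.PIPE, universal_newlines=True)
    return [line for line in out.stdout.splitlines() if '::' in line]

def invoke_all(test_ids):
    # One async invocation per test ID; Lambda runs them all in parallel.
    for test_id in test_ids:
        lambda_client.invoke(
            FunctionName='selenium-lambda-dev-lambda_handler',  # assumed deployed name
            InvocationType='Event',  # async: returns immediately with a 202
            Payload=json.dumps({'test_id': test_id}),
        )

if __name__ == '__main__':
    invoke_all(collect_test_ids())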
The final thing remaining was reporting, of course. Because these were all async Lambda invocations, we could not collect the test execution logs directly. Instead, we used Pytest hooks to capture each test's execution details at the automation framework level and write them to a NoSQL database keyed by build ID, which is where DynamoDB came into the picture. The hooks also upload screenshots of the failed test cases to our AWS S3 bucket and generate a pre-signed S3 URL, all of which becomes part of the test execution details. Afterwards, we fetch all the test data from DynamoDB, generate our custom report in JSON format (which an HTML report or many other downstream systems can consume), and make the final test report available to the Jenkins build as an artifact.
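A condensed sketch of what those hooks can look like in conftest.py (the table name, bucket name, and screenshot path are all assumptions):
# conftest.py (illustrative sketch; names and paths are assumptions)
import os
import time
import boto3

dynamodb = boto3.resource('dynamodb', region_name='ap-southeast-1')
s3 = boto3.client('s3', region_name='ap-southeast-1')
results_table = dynamodb.Table('ui-test-results')  # assumed table name
SCREENSHOT_BUCKET = 'ui-test-failures'             # assumed bucket name

def pytest_runtest_logreport(report):
    # Record the outcome of each test's call phase against the current build.
    if report.when != 'call':
        return
    item = {
        'build_id': os.environ.get('BUILD_ID', 'local'),
        'test_id': report.nodeid,
        'outcome': report.outcome,
        'duration_s': str(round(report.duration, 2)),  # DynamoDB rejects floats
        'timestamp': int(time.time()),
    }
    if report.failed:
        # Push the failure screenshot to S3 and attach a pre-signed URL.
        key = report.nodeid.replace('/', '_') + '.png'
        s3.upload_file('/tmp/screenshot.png', SCREENSHOT_BUCKET, key)  # assumed path
        item['screenshot_url'] = s3.generate_presigned_url(
            'get_object',
            Params={'Bucket': SCREENSHOT_BUCKET, 'Key': key},
            ExpiresIn=86400,
        )
    results_table.put_item(Item=item)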
Conclusion
The tests that used to take close to 3 hours now take about 3 minutes (the maximum time taken by any single end-to-end UI test). The two Jenkins nodes we used to run tests in parallel are no longer required, and at our current scale we save hundreds of dollars per month on AWS costs. What's more, we achieved effectively infinite scaling: because Lambda scales with the number of invocations, execution time stays the same even as we add more tests. AWS Lambda has definitely made life easy. We can now add as many tests as we need without worrying about execution time.
Saving time as well as money, a win-win situation!