AWS Open Source Blog

How to build a scalable BigBlueButton video conference solution on AWS

BigBlueButton is an open source video conference system that supports various audio and video formats and allows the use of integrated video-, screen-, and document-sharing functions. BigBlueButton has features for multi-user whiteboards, breakout rooms, public and private chats, polling, moderation, emojis, and raised hands. In this post, we will explain how AWS customers who are looking for a self-managed, open source video conference solution can leverage AWS to build and deploy a scalable BigBlueButton setup. We’ll briefly explore the AWS services, features, and open source components integrated into the architecture, and we will explain how to use the necessary scripts and stack templates.

BigBlueButton integrates with a variety of third-party tools, such as Moodle, Sakai, Drupal, Joomla, or WordPress; however, we will stick to BigBlueButton’s native integration in this post.

The demonstrated setup is a starting point from which we can implement our own learning and video conference platform based on BigBlueButton. Scalability and elasticity are set up following best practices, within the limits of BigBlueButton’s own capabilities. The open source components stay as close as possible to their default settings so that the setup remains easy to understand when going through each component’s own documentation.

We use Infrastructure as Code (IaC), built to be as modular as possible so that we can easily replace parts or extend it according to our needs. IaC is the process of managing and provisioning infrastructure using machine-readable definition files. It’s a conceptual approach closely connected to DevOps and the cloud. For the deployment of the infrastructure and application, we will use AWS CloudFormation, an AWS-native IaC service.

Disclaimer

Please note that running this code will deploy the Scalelite software, which is licensed under AGPL-3.0, in your account.

Also, be aware that the AWS Free Tier does not cover this deployment. Use the AWS pricing calculator to perform an estimate beforehand.

Architecture overview

BigBlueButton video conference solution on AWS architecture overview

We base the infrastructure we’re building on AWS service offerings. Let’s dive into them and learn what we’ll deploy.

The base of the overall deployment is AWS CloudFormation, an AWS service providing a common language to write and deploy AWS and third-party application resources. As previously mentioned, AWS CloudFormation is an AWS-native IaC service that will help us to deploy our infrastructure. It also ensures we limit the operational and management burden in the future by giving us a central point for updating the deployed components and their configuration.

We will deploy into one Region and multiple Availability Zones. With AWS global infrastructure, and the concept of Regions and Availability Zones, we can build highly available and globally scalable infrastructure fitting to our needs.

The infrastructure deploys into a dedicated Amazon Virtual Private Cloud (VPC). We’ll use Amazon VPC as the isolated section of the AWS cloud for our video conference platform. Amazon VPC provides features such as firewalls (security groups), subnets, routing, and NAT gateways. This means we can design and build a secure, scalable and reliable network infrastructure.

For our deployment, we will use private and public subnets for each tier of the application layer and security groups to limit service exposure. Additionally, we will use NAT gateways to allow private subnets outbound internet access for maintenance traffic.

Amazon Elastic Compute Cloud (Amazon EC2) is an AWS core service offering for compute. It’s a web service, which will provide secure and elastic compute capacity for our deployment. We’ll use Amazon EC2 to deploy our BigBlueButton application, as well as the TURN server. The TURN (Traversal Using Relays around NAT) protocol assists with traversal of network address translators (NAT), or firewalls, for video/audio traffic. We’ll use Coturn, an open source TURN server implementation, in our infrastructure.

To run the containers of our Greenlight web interface and Scalelite application load balancer, we choose Amazon Elastic Container Service (Amazon ECS). Amazon ECS is a fully managed container orchestration service, which will ensure our front-end/web interface and meeting distribution will be scalable and highly available. It is feature-rich and allows easy, native integration into other AWS services.

Greenlight is BigBlueButton’s native web interface for user, recording, and meeting management. It will be the entry point into our video conference platform.

Scalelite is an open source load balancer tailored for use with BigBlueButton. It manages our pool of BigBlueButton application servers and distributes meetings to the least-loaded instance in the pool.

Amazon EC2 Auto Scaling and AWS Auto Scaling are services we’ll use to scale the application- and container-based workloads according to our reliability and performance needs.

Amazon EC2 Auto Scaling automatically adds or removes Amazon EC2 instances based on usage, and it monitors the health and availability of our fleet of instances. This will ensure our Amazon ECS worker nodes, BigBlueButton application servers, and the TURN server are up and running, and that the scale fits our performance needs. Although the Amazon ECS worker nodes scale automatically, the BigBlueButton servers scale in a planned manner as an operational task.

AWS Auto Scaling monitors and automatically adjusts capacity of applications based on defined targets. Using AWS Auto Scaling ensures applications have the right resources at the right time. In our deployment, AWS Auto Scaling monitors and automatically scales our container-based web interface and Scalelite application load balancer.

Because we will use Amazon ECS, we will leverage Amazon ECS capacity providers to align to the performance needs of the applications. Amazon ECS capacity providers enable us to manage the infrastructure the tasks in our Amazon ECS cluster use.

We’ll use Amazon Elastic File System (Amazon EFS) for shared storage of all recordings done in video conferences. Amazon EFS provides a simple, scalable, and fully managed NFS file system that we’ll mount into our application instances and web interface containers. This will enable us to distribute our meeting recordings from the application to the front end, independently of the BigBlueButton application instance on which the meeting actually has been recorded. Amazon EFS supports encryption by default, at rest and in transit. We’ll enable both to secure recordings with the highest standards.
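The stack handles mounting the file system automatically, but as a minimal sketch of what an encrypted-in-transit mount looks like on an instance with the amazon-efs-utils package installed (the file system ID and mount path below are placeholders, not values from this deployment):

sudo mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/recordings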

To handle meeting schedules and meeting login information in a scaled setup, we will use Amazon Aurora and Amazon ElastiCache as central and shared datastores for Greenlight and Scalelite.

Aurora is a MySQL- and PostgreSQL-compatible database service built for the cloud. It combines the performance and availability of enterprise databases with the cost-effectiveness of open source databases. Because it’s a fully managed service, Aurora takes the burden of operating a highly available relational database off of us.

ElastiCache offers fully managed Redis and Memcached. We’ll use it with Redis as the key-value store for our setup. Redis will hold each application instance’s ID, its public URL, its shared secret, and its current load metrics, and it also keeps the internal meeting IDs. Redis serves as the datastore from which Scalelite gets the information on where to forward users when they want to attend a video conference.

To distribute traffic to our setup we’ll use Amazon Route53, AWS Certificate Manager (ACM), and Elastic Load Balancing (ELB).

Route53 is a highly available and scalable DNS service. We’ll automate registering and deregistering the DNS records of our BigBlueButton application servers and the TURN server using AWS API calls and Route53. This is a neat and easy way to use the elasticity of AWS.
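As an illustration of this idea (not the exact call the project’s scripts make), an application server can upsert its own DNS record at boot with a single Route53 API call; the hosted zone ID, record name, type, and IP address below are placeholders:

aws route53 change-resource-record-sets --hosted-zone-id <your-hosted-zone-id> \
--change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"bbb-app-1.bbb.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'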

ACM is used to provide SSL/TLS certificates for the web interface and Scalelite application load balancer, which are published and load balanced using ELB. ACM is a service that allows us to easily provision, manage, and deploy public and private SSL/TLS certificates. It also handles the certificate renewals for us automatically.

ELB is a fully managed service that enables us to distribute incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, or IP addresses. It scales automatically with the demand and provides high availability by default. We’re using ELB to distribute the incoming traffic toward our web interface and Scalelite application load balancers. This ensures the high availability of these setup components.

You might have noticed that we’re not using ACM or Elastic Load Balancing for the application and TURN server instances. This is due to technical considerations: BigBlueButton distributes meetings via Scalelite using a 1-to-1 matching. The traffic, audio, and video signal of one meeting will never be distributed over multiple instances. The TURN server uses a mix of TCP and UDP traffic, as well as unpredictable high ports. To stay close to the BigBlueButton default setup, we’ll stick with the architecture of BigBlueButton and TURN, and use direct network paths to the instances.

Amazon Simple Email Service (Amazon SES) is a fully managed email service that enables us to send emails out of our setup. We’ll use Amazon SES to send out system notifications and meeting invitations.

With AWS Identity and Access Management (IAM), we will manage access to all deployed AWS services and resources securely. Using IAM, we’ll define policies and roles for our instances and containers that follow the principle of least privilege. This ensures these deployments and applications have access only to the parts of our AWS resources they need to operate. This is an important design principle to ensure a secure setup and to limit the blast radius if there is any vulnerability in the code of our application components.

AWS Secrets Manager helps us to protect secrets needed by our applications, services, or IT resources. We utilize Secrets Manager in our setup to grant database login credentials to our container workloads and pre-shared security keys to our application stack. We’ll also persist our setup’s administrative login credentials in the service.

AWS Systems Manager gives us visibility and control of the deployed infrastructure. We use Systems Manager’s Session Manager to access the deployed instances on the shell level directly. This eliminates the need to expose any additional services, like SSH, to the public networks if needed for debugging.
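For example, with the Session Manager plugin for the AWS CLI installed, we can open a shell on one of the deployed instances without exposing SSH at all (the instance ID below is a placeholder):

aws ssm start-session --profile bbb_deployment --target i-0123456789abcdef0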

To help us to operate and monitor our deployed video conference platform, we’ll use Amazon CloudWatch for metrics collecting, system event monitoring, and alarming. Additionally, we’ll import all container and instance system logs into Amazon CloudWatch Logs and Amazon ECS CloudWatch Container Insights. This gives us a central management console to investigate our infrastructure health.
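For instance, with AWS CLI v2 we can list the log groups the deployment created and follow one of them live; the log group name in the second command is illustrative rather than the exact name the stack uses:

aws logs describe-log-groups --profile bbb_deployment \
--query 'logGroups[*].logGroupName' --output text

aws logs tail /ecs/bbb-greenlight --follow --profile bbb_deployment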

Now let’s dive deep into how the deployment actually works.

Getting started

Prerequisites

To run the deployment, we need software components installed on the device where we’ll execute the IaC code. It does not matter whether you run the scripts on macOS, Linux, or Windows Subsystem for Linux (WSL). The step-by-step guide found in the project repository will also work on Windows PowerShell. Alternatively, you can use the AWS Management Console to deploy the setup; in that case, follow the step-by-step guide and translate the AWS Command Line Interface (AWS CLI) commands into their console equivalents.

We’ll stick to the fully automated, script-based deployment for this tutorial.

Install the following: the AWS Command Line Interface (AWS CLI) and git, as both are used by the deployment commands in this guide.
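If you have not already done so, configure a named AWS CLI profile for the target account; the examples below use a profile called bbb_deployment, but any profile name works as long as you pass it with the -p parameter later:

aws configure --profile bbb_deployment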

In your AWS account, set up a public hosted zone in Route53. We’ll use this hosted zone later in the automation to register the front-end and application instances so that they are accessible with valid SSL certificates in place. The hosted zone can be based on a domain registered through Route53 or on an externally registered domain. You can also use a subdomain for the setup, as long as it is delegated to Route53.
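You can look up the ID of your hosted zone, which you’ll need later for the -h parameter, with the AWS CLI; note that the returned ID is prefixed with /hostedzone/, and only the trailing part is passed to the script:

aws route53 list-hosted-zones --profile bbb_deployment \
--query 'HostedZones[*].[Id,Name]' --output text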

You’ll need one or more valid email addresses. The setup will send notifications and alarms to the email address(es) during the deployment and while in operation. The operator email address will also be used for your initial administrator login.

The target AWS account should have Amazon SES out of sandbox mode; alternatively, verify each email address you want to use for testing and invitations as a destination email address.
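While the account is still in the SES sandbox, you can verify a destination address with the AWS CLI; as a sketch (the address below is a placeholder), the recipient then receives a confirmation link to click:

aws ses verify-email-identity --profile bbb_deployment \
--email-address attendee@example.com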

Parameters

The deployment scripts and automation components can be influenced by passing parameters. Find a full list of parameters in the GitHub repository. For this tutorial, let’s stick to the parameters we pass to the setup.sh script to customize core aspects of the deployment.

Pass the following CLI parameters to the script:

  • -p: the AWS CLI-profile to use
  • -e: the operator’s (or your own) email address
  • -h: the hosted zone ID the DNS records will be added to
  • -s: the CloudFormation stack name you want to use
  • -d: the FQDN of the hosted zone

If you’d like to alter and/or adjust additional parameters, review the GitHub project’s README for more insights and the bbb-on-aws-param.json file in the root of the project where you can change their values.
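Before editing anything, it can help to check what a parameter is currently set to; for example, the environment type parameter used later in this post can be inspected with a simple grep:

grep BBBEnvironmentType bbb-on-aws-param.json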

Deployment

First, you’ll need to clone the project:

git clone https://github.com/aws-samples/aws-scalable-big-blue-button-example.git

cd ./aws-scalable-big-blue-button-example/

To start deployment, pass the parameters above to the setup.sh script:

./setup.sh -p bbb_deployment -e johndoe@example.com -h XSgu71231 -s bbb-stack -d bbb.example.com

The deployment will take 30-45 minutes.

You can check the state of the deployment any time using the AWS Console or via CLI command:

aws cloudformation describe-stack-events --profile bbb_deployment \
--stack-name bbb-stack
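If you’d rather have the CLI block until stack creation has finished, you can also use the corresponding wait command with the same profile and stack name:

aws cloudformation wait stack-create-complete --profile bbb_deployment \
--stack-name bbb-stack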

If everything goes through successfully, the console should look similar to the following image:

console view showing "create_complete"

The deployment then splits into main and related nested stacks. Each stack provides output containing the most helpful resource information.

The deployment process adds CNAME records for the front end, as well as for all publicly available services, to the Route53 hosted zone in your account.
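To verify the records, we can list the record sets of our hosted zone; for example, the following prints all CNAME entries (using the hosted zone ID from the setup.sh call above):

aws route53 list-resource-record-sets --profile bbb_deployment \
--hosted-zone-id XSgu71231 \
--query "ResourceRecordSets[?Type=='CNAME'].[Name,ResourceRecords[0].Value]" --output table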

To access our new, shiny video conference platform, navigate to the Greenlight front end at conference.<your domain> (conference.bbb.example.com in our example).

This brings us to the landing page provided by Greenlight:

welcome screen with greenlight features buttons

We can sign in using our administrator credentials:

  • Email/User: johndoe@example.com (the email we set as operator email address using the parameter for the setup.sh)
  • Password: A generated password you can find in Secrets Manager

To access the password, select the BBBAdministratorLogin-XXXXX from the list of secrets and retrieve the password value:

select the BBBAdministratorLogin-XXXXX from the list of secrets and retrieve the password value

Alternatively, we can use the AWS CLI to retrieve the value. First, we’ll look for our secret:

aws secretsmanager list-secrets --profile bbb_deployment \
--output text --query 'SecretList[*].Name'

Take note of the name starting with BBBAdministratorLogin-

Next, retrieve the password value:

aws secretsmanager get-secret-value --profile bbb_deployment \
--secret-id BBBAdministratorLogin-XXXX --query SecretString --output text

Now we’re ready to sign into our video conference platform using the top right button.

After signing in, we will be at our personal landing page.

home room screen with start button

From here, we can administer our setup, or start our first meeting.

Let’s begin our first meeting using the start button on the landing page. First, perform the Echo Test to ensure your headset or microphone/speaker setup is correctly working.

welcome screen

Welcome to your scalable BigBlueButton setup.

Scaling out and scaling in

AWS services and their features make it a simple operations task to scale out the BigBlueButton setup.

We’ll use Amazon EC2 Auto Scaling to scale our BigBlueButton application server instances out and in.

Within the AWS Management Console, we open the operations console for Amazon EC2 Auto Scaling:

operations console for EC2 Auto Scaling

We’re looking for the group containing “-BBBAppStack-” in its name. Select it and then select Edit.

Screenshot: selecting the Auto Scaling group containing “-BBBAppStack-” in its name and choosing Edit

On the next page, we’ll raise the counts for the Desired, Minimum, and Maximum capacities to test the scaling of the application instances.

screenshot showing raising the counts for Desired, Minimum and Maximum Capacity to test out the scaling of the application instances

Another application instance will be provisioned and ready to host video conferences after a few minutes.

We’re not using the Amazon EC2 Auto Scaling feature of dynamically scaling out and in based on metrics. Therefore, we have to make sure the scale is set to a fixed number by using the same number of instances for the Desired, Minimum, and Maximum capacities.

For a more permanent scaling change, we can alter the corresponding parameter in the bbb-on-aws-param.json file in the project folder and run ./setup.sh again.
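The same adjustment can be made from the CLI; as a sketch, we first look up the full name of the Auto Scaling group containing “-BBBAppStack-” and then set the Minimum, Maximum, and Desired capacities to the same fixed count (two in this example):

aws autoscaling describe-auto-scaling-groups --profile bbb_deployment \
--query "AutoScalingGroups[?contains(AutoScalingGroupName, 'BBBAppStack')].AutoScalingGroupName" \
--output text

aws autoscaling update-auto-scaling-group --profile bbb_deployment \
--auto-scaling-group-name <name-from-previous-command> \
--min-size 2 --max-size 2 --desired-capacity 2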

Cleaning up

To remove the deployment, we can navigate to AWS CloudFormation and delete the main stack. After the main stack and its nested stacks are deleted, we empty the S3 bucket generated for the source files and delete the SourceBucket stack.

Alternatively, we can use the ./destroy.sh script in the repository root.

./destroy.sh takes two parameters:

  • -p: the AWS CLI profile to use
  • -s: the CloudFormation stack name you used

For example:

./destroy.sh -p bbb_deployment -s bbb-stack

Optional: Simple, “single”-instance deployment

If we want to deploy a single BigBlueButton application server that also provides the Greenlight web interface, plus a second instance providing the TURN server, we can set the environment type parameter to single. AWS CloudFormation will then not set up any Amazon ECS, Amazon RDS, or Amazon ElastiCache resources, cutting our setup down to the bare minimum needed to demo or test BigBlueButton.

When we look into the bbb-on-aws-param.json file in the root of the project, we can change "BBBEnvironmentType": "scalable" to "BBBEnvironmentType": "single". Then we can run the previously mentioned deployment using setup.sh.
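If we prefer the command line over an editor, the same change can be made with sed; this assumes the value is currently set exactly as quoted above and keeps a .bak backup of the original file:

sed -i.bak 's/"BBBEnvironmentType": "scalable"/"BBBEnvironmentType": "single"/' bbb-on-aws-param.json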

If we are using the console, we can easily change the parameter on the parameter setup/selection screen.

Optional: Serverless deployment for Greenlight and Scalelite

If we want to deploy Greenlight and Scalelite utilizing serverless infrastructure, we can alter one parameter within our bbb-on-aws-param.json to change the way our Amazon ECS cluster is operating from EC2-based worker nodes to AWS Fargate. Change:
"BBBECSInstanceType": "t3a.medium",

to:

"BBBECSInstanceType": "fargate",

and run the deployment as described above using setup.sh. If we use the AWS Management Console, the parameter can easily be changed on the parameter setup/selection screen.

What’s next

The current code example and solution setup offers further improvement opportunities with additional automation and scalability. For instance, automatic scaling of application instances is something to further review. Additionally, the distribution of the core conference components over multiple instances would improve the elasticity and reliability of the system.

These ideas will require further customization of the BigBlueButton, Coturn, and Scalelite setups, and will need deeper BigBlueButton configuration changes.

Conclusion

This post showed how to set up a scalable video conference solution using AWS services and open source software components. We explored how to utilize IaC via AWS CloudFormation and bash scripting to fully automate the infrastructure setup and application deployment.

Feel free to give us feedback and share your thoughts. You’re welcome to submit pull requests, and please don’t hesitate to use the GitHub issue tracker.

All components of the project are open source—contribute, give feedback, or reach out if you run into issues:

Setup: aws-scalable-big-blue-button-example (https://github.com/aws-samples/aws-scalable-big-blue-button-example)

Third-party/open source components: BigBlueButton, Greenlight, Scalelite, Coturn

David Surey


David Surey is a Solutions Architect based in Berlin, Germany. He has about 20 years of experience in the IT industry and has worked in desktop and server support roles, as a systems engineer, and as a cloud solutions architect. For years his focus has been on data-center automation and innovation as well as cloud infrastructure development. He is a passionate gamer and loves to tinker with open source software. Find him on Twitter @couchgott.

Bastian Klein


Bastian is a Solutions Architect in the small and medium-sized business team based in Munich. He is passionate about containers and DevOps and has worked in these fields as a software engineer and architect for the past five years.