AWS Open Source Blog

Getting started with Travis-CI.com on AWS Graviton2

AWS Graviton2 processors deliver a major leap in performance and capabilities over first-generation AWS Graviton processors. They power Amazon Elastic Compute Cloud (Amazon EC2) M6g, C6g, and R6g instances, and their variants with local disk storage. Graviton2-based EC2 instances provide up to 40% better price/performance over comparable current generation x86-based instances for a wide variety of workloads, including:

  • Application servers
  • Microservices
  • High-performance computing
  • Electronic design automation
  • Gaming
  • Open source databases
  • In-memory caches

Given the price and performance benefits of Graviton2, customers are adopting it quickly. This means many software teams and open source projects need to build and test their software on both the x86 and Arm64 architectures. That is why the general availability of Travis-CI.com on AWS Graviton2-based instances is exciting: it enables easy and efficient building, testing, and deployment of code artifacts for Arm64-based systems. Travis-CI.com builds using Graviton2 are available as either full virtual machines (VMs) or LXD containers. The builds are up to two times faster than the previous Arm64 builds. Adding AWS Graviton2-based builds to a project requires only a few extra lines of configuration.

In this post, we show how to get started building, testing, and deploying quickly through Travis-CI.com on Graviton2 using two examples. This post assumes that the reader is already familiar with Travis-CI.com’s basic concepts and knows how to link Travis-CI.com with their source repository. If not, please refer to the Travis CI tutorial to get started. (Note: Do not confuse this with Travis-CI.org, which is deprecated and does not have access to Graviton2.)

Example 1: Add Graviton2 to a generic project

A simple Travis CI build for a C project on x86_64 is shown in the following example .travis.yml file:

language: c

script:
    - make test
    
after_success:
    - ./test

The Travis-CI.com project builds the test target via make in the script phase. If the build succeeds, it runs the resulting test binary in after_success to verify that the binary works.

To add building and testing on Graviton2, we must add a few keys to the build file: arch, virt, dist, and group.

arch controls which CPU architectures the build runs on; to enable Graviton2, we add arm64-graviton2.
Note that setting arch to arm64 targets the original Travis-CI.com Arm64 build servers, which results in significantly slower builds and can cause some projects to time out.

In addition to setting arch for Graviton2 support, a few more keys must be defined:

  • For VM builds, set virt: vm, dist: focal, and group: edge.
  • For LXD builds, set virt: lxd and group: edge; dist can be bionic, xenial, or focal.
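For reference, a hedged sketch of an LXD-based configuration (same project, container build instead of a full VM) could look like this:

```yaml
# Hypothetical LXD-based variant; bionic or xenial also work for dist
virt: lxd
dist: focal
group: edge
arch:
    - arm64-graviton2
```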

For our example, we configure building on a VM and set the extra keys in the build file as follows:

language: c
dist: focal
virt: vm
group: edge
arch:
    - arm64-graviton2
    - amd64

script:
    - make test

after_success:
    - ./test

The rest of .travis.yml remains unchanged, and builds now run on both target architectures.

Example 2: Add Graviton2 to build wheels for a Python project

A common use case for Travis CI is to build, test, and distribute binary extensions for interpreted languages, such as Python. Python uses a package format called a “wheel” for distributing both Python-only packages and packages that contain native code compiled for a particular target machine. For instance, during a pip install, you may see it downloading packages named like the following:

  • <package name>-<version number>-<python abi>-<platform architecture>.whl

These files must be built for each target architecture the package supports. Pure Python modules use “any” as a platform architecture. When binaries are also built—for example, by compiling a C library—the wheel becomes platform-specific, containing binaries for x86 or Arm. Travis CI hosted on AWS Graviton2 allows for fast building and distribution of these Python packages that contain native code extensions and target Arm64.
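To make the naming concrete, here is a small shell sketch using hypothetical wheel filenames (numpy 1.21.0 is only an illustrative example) that extracts the platform tag, the one part that differs between the x86 and Graviton2 builds of the same package:

```shell
# Two hypothetical wheels of the same package; only the platform tag differs
for whl in numpy-1.21.0-cp38-cp38-manylinux2014_x86_64.whl \
           numpy-1.21.0-cp38-cp38-manylinux2014_aarch64.whl; do
    base=${whl%.whl}                  # strip the .whl extension
    echo "platform tag: ${base##*-}"  # keep the text after the last dash
done
```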

To build a wheel, we use a short script to create the wheel packages. The following example is referenced later in this post as the build-wheels.sh script:

#!/bin/bash
set -e -u -x

cd /src/

# Create binary wheels
/opt/python/${PYTHON}/bin/python3 setup.py bdist_wheel

# Normalize resulting binaries to a common format
for whl in dist/*.whl; do
    auditwheel repair "$whl" --plat "${PLAT}" -w wheelhouse
done
# After auditwheel, any tests on the resulting wheel can be performed

To build a wheel in Travis CI, we must use Docker to build the wheels in a lowest-common-denominator environment called manylinux. Using the manylinux containers results in binaries that are compatible across many different Linux distributions. To build and distribute x86-only wheels for our hypothetical project, our .travis.yml is:

language: python
services: docker

matrix:
  include:
    - python: 3.8
      env: >
        DOCKER_IMAGE=quay.io/pypa/manylinux2014_x86_64
        PYTHON=cp38-cp38 PLAT=manylinux2014_x86_64
       
install:
  - python3 -m pip install twine
  - docker pull $DOCKER_IMAGE

script:
  - docker run --rm -e PLAT="$PLAT" -e PYTHON="$PYTHON" -v "$(pwd)":/src "$DOCKER_IMAGE" /src/build-wheels.sh
  
after_success:
    - |
      if [ -n "$TRAVIS_TAG" ]; then
        python3 -m twine upload --skip-existing wheelhouse/*
      fi

This project builds the wheels inside the container by:

  • Mounting the present working directory into the container’s filesystem.
  • Performing the build through the build-wheels.sh script.
  • On success, uploading the artifacts with twine if the git commit that triggered the build was tagged.
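One subtlety in the after_success step is shell quoting: if TRAVIS_TAG is unset or empty, an unquoted test such as [ -n $TRAVIS_TAG ] collapses to [ -n ], a one-argument test that is always true, so untagged builds would attempt an upload. Quoting the variable gives the intended behavior, as this small sketch demonstrates:

```shell
# With the variable unset, the unquoted and quoted tests disagree:
unset TRAVIS_TAG
[ -n $TRAVIS_TAG ]   && echo "unquoted: test passes even without a tag"
[ -n "$TRAVIS_TAG" ] || echo "quoted: test correctly fails without a tag"
```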

To enable building Arm wheels on Graviton2, we add the dist, virt, and group keys as described in Example 1. We also add an Arm64-specific entry to the matrix.include section for the build job. The resulting .travis.yml build specification is:

language: python
services: docker
dist: focal
virt: vm
group: edge

matrix:
  include:
    - arch: amd64
      python: 3.8
      env: >
        DOCKER_IMAGE=quay.io/pypa/manylinux2014_x86_64
        PYTHON=cp38-cp38 PLAT=manylinux2014_x86_64
    - arch: arm64-graviton2
      python: 3.8
      env: >
        DOCKER_IMAGE=quay.io/pypa/manylinux2014_aarch64
        PYTHON=cp38-cp38 PLAT=manylinux2014_aarch64
       
install:
  - python3 -m pip install twine
  - docker pull $DOCKER_IMAGE

script:
  - docker run --rm -e PLAT="$PLAT" -e PYTHON="$PYTHON" -v "$(pwd)":/src "$DOCKER_IMAGE" /src/build-wheels.sh
  
after_success:
    - |
      if [ -n "$TRAVIS_TAG" ]; then
        python3 -m twine upload --skip-existing wheelhouse/*
      fi

Conclusion

Working with Graviton2 builders on Travis-CI.com is straightforward for both simple and complex projects. Projects can enable Graviton2 builds on Travis-CI.com today to take advantage of the faster build times. To learn more about AWS Graviton, visit the AWS Graviton website. To get started using Graviton2 for your projects, visit Travis-CI.com to register your account and link your repositories.

Geoff Blake


Geoff is a Systems Development Engineer at AWS (Annapurna) working on Graviton processors in Austin, Texas. For the past 10 years, he has been working on making Arm-based servers a reality.

Janakarajan Natarajan


Janakarajan is a Systems Development Engineer at AWS (Annapurna Labs) working on Graviton processors in Austin, Texas. Prior to joining AWS, he worked at AMD for four years doing Linux kernel development for EPYC processors.