AWS Machine Learning Blog

Running fast.ai notebooks with Amazon SageMaker

Update 25 JAN 2019: fast.ai has released a new version of their library and MOOC making the following blog post outdated. For the latest instructions on setting up the library and course on a SageMaker Notebook instance please refer to the instructions outlined here: https://course.fast.ai/start_sagemaker.html


fast.ai is an organization dedicated to making the power of deep learning accessible to all. It has developed a popular open source deep learning library, also called fast.ai, built on top of PyTorch. The library is focused on usability and allows users to create state-of-the-art models with just a few lines of code in domains such as computer vision, natural language processing, structured data, and collaborative filtering. fast.ai also offers a very popular online course designed to enable developers with no background or experience in machine learning to learn the library and deploy state-of-the-art deep learning models in just a few weeks.

One of the key benefits of Amazon SageMaker is that it provides, in just one click, a fully managed machine learning notebook environment based on the popular open source Jupyter notebook format. This blog post describes how to deploy the fast.ai library and its example Jupyter notebooks onto an Amazon SageMaker hosted notebook so that you can train your fast.ai-based deep learning models. This is useful if you are working through the fast.ai online course or want to build and train your own fast.ai-based deep learning model for a custom application. We walk you through all the steps needed to automate the setup and configuration of your custom fast.ai environment on your SageMaker notebook instance.

Step 1: Create an Amazon SageMaker notebook lifecycle configuration

Amazon SageMaker provides you the ability to manually install additional libraries on your notebook instances. However, these customizations are removed when the notebook instance is terminated, which means you need to reapply them manually each time you restart the instance. The recently launched lifecycle configuration feature in Amazon SageMaker removes this burden by allowing you to automate these customizations so that they are applied at different phases of the lifecycle of an instance.

In our example, we use the lifecycle configuration feature to install the fast.ai library and its associated Anaconda environment every time the notebook instance is started, avoiding repeated installation steps on each restart.
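If you prefer scripting to the console, the same lifecycle configuration can be created with the AWS CLI. The following is a sketch: the placeholder files `on-start.sh` and `on-create.sh` stand in for the scripts shown in steps 4 and 5 below, and the call assumes AWS credentials with SageMaker permissions are configured.

```shell
# Placeholder scripts -- in practice, use the on-start and on-create
# lifecycle scripts shown later in this post.
printf '#!/bin/bash\necho "on-start placeholder"\n' > on-start.sh
printf '#!/bin/bash\necho "on-create placeholder"\n' > on-create.sh

# The CLI expects the script content base64-encoded (-w0 avoids line wraps).
ON_START=$(base64 -w0 on-start.sh)
ON_CREATE=$(base64 -w0 on-create.sh)

# Create the lifecycle configuration with both scripts attached.
aws sagemaker create-notebook-instance-lifecycle-config \
    --notebook-instance-lifecycle-config-name FastaiConfigSetting \
    --on-start Content="$ON_START" \
    --on-create Content="$ON_CREATE"
```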

  1. Open the Amazon SageMaker console at https://console.aws.amazon.com/sagemaker/.
  2. Choose Notebook, and then Lifecycle configurations in the left navigation pane. Then choose the Create configuration button to create a new configuration.
  3. Give your configuration setting a name. In our example we use FastaiConfigSetting.
  4. In the Start notebook panel, copy the following bash script content. This script sets up the fast.ai conda environment, with all the needed Python packages and dependent libraries, from the fast.ai GitHub repository that is cloned when the instance is created (see step 5).
    Note: The script will take some time to set up all the required packages and libraries, so we run it as a background process to overcome the five-minute maximum time limit for a lifecycle configuration script.

    #!/bin/bash
    
    set -e
    
    echo "Creating fast.ai conda environment"
    cat > /home/ec2-user/fastai-setup.sh << EOF
    #!/bin/bash
    cd /home/ec2-user/SageMaker/fastai
    conda env update
    source activate fastai
    echo "Finished creating fast.ai conda environment"
    EOF
    
    chown ec2-user:ec2-user /home/ec2-user/fastai-setup.sh
    chmod 755 /home/ec2-user/fastai-setup.sh
    
    sudo -i -u ec2-user bash << EOF
    echo "Creating fast.ai conda env in background process."
    nohup /home/ec2-user/fastai-setup.sh &
    EOF
    

    It should look something like the following screenshot:
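Because the environment build runs as a background process, you can follow its progress from a terminal on the notebook instance (New > Terminal in Jupyter). This sketch assumes `nohup` wrote its output to `nohup.out` in the ec2-user home directory, which is where the script above launches it from:

```shell
# Follow the background environment build and stop once it reports completion.
LOG=/home/ec2-user/nohup.out
if [ -f "$LOG" ]; then
  tail -f "$LOG" | grep -m1 "Finished creating fast.ai conda environment"
fi
```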

  5. Now choose the Create notebook tab and copy the following script. This script clones the fast.ai GitHub repository and creates the course data directory when the notebook instance is created.
    #!/bin/bash
    
    set -e
    
    sudo -i -u ec2-user bash << EOF
    git clone https://github.com/fastai/fastai.git /home/ec2-user/SageMaker/fastai
    mkdir /home/ec2-user/SageMaker/fastai/courses/dl1/data
    EOF
    

    It should look something like the following screenshot:

  6. Now choose the Create configuration button to create the new lifecycle configuration for our fast.ai notebook. You will now see your newly created lifecycle configuration with the given name and an Amazon Resource Name (ARN).

Now we are ready to create our Amazon SageMaker notebook instance to build and train our fast.ai models.

Step 2: Create an Amazon SageMaker notebook instance

An Amazon SageMaker notebook instance is a fully managed ML Amazon EC2 instance running the Jupyter Notebook software. For more information, see Notebook Instances and Notebooks.

  1. Open the Amazon SageMaker console at https://console.aws.amazon.com/sagemaker/.
  2. Choose Notebook instances, and then choose Create notebook instance.
  3. On the Create notebook instance page, provide the following information:
  • For Notebook instance name, type FastaiNotebook.
  • For Instance type, choose ml.p2.xlarge or ml.p3.2xlarge. We require an instance with a GPU because we will train our fast.ai models on the notebook instance itself. The ml.p2.xlarge instance has an NVIDIA K80 GPU, and the ml.p3.2xlarge has the more powerful NVIDIA V100 GPU, so training times for our models will be accelerated compared to other instance types. The ml.p3.2xlarge has a higher hourly charge than the ml.p2.xlarge ($4.284 vs. $1.26 per hour in the N. Virginia Region), but because it will most likely train your deep learning models faster, the total cost of training may be lower since you do not need to run the instance as long. The default per-account limit is 1 ml.p2.xlarge instance and 2 ml.p3.2xlarge instances, so ensure that there are not too many other notebooks of the same type running in the same account, or create a support case with AWS Support to request a limit increase. Details of the Amazon SageMaker default limits per account can be found here: https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_sagemaker.
  • For IAM role, use an existing role for Amazon SageMaker notebooks if you have one, or create a new IAM role by selecting Create a new role and accepting the default options, which provide access to any Amazon S3 bucket with “sagemaker” in the bucket name.
  • The VPC setting is optional and not necessary here. Select it only if the notebook needs to access resources, such as databases or other systems, running in or accessible from your VPC. In our case we have no dependencies on external systems.
  • For Lifecycle configuration, select the Lifecycle configuration that was created in Step 1.
  4. Choose Create notebook instance. This launches an ML compute instance, in this case a notebook instance, and attaches an ML storage volume to it.
  5. It takes a few minutes to create the notebook instance and run the lifecycle configuration scripts. To check the logs of the lifecycle configuration scripts, view them in Amazon CloudWatch Logs under the log group /aws/sagemaker/NotebookInstances. There should be two CloudWatch log streams: one for when the notebook instance is created, called FastaiNotebook/LifecycleConfigOnCreate, and another for when it is started, called FastaiNotebook/LifecycleConfigOnStart. Open the log stream named FastaiNotebook/LifecycleConfigOnStart and wait until you see the log text “Finished creating fast.ai conda environment” before proceeding to Step 3:
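The console steps above can equally be scripted with the AWS CLI. This is a sketch: the role ARN is a placeholder for your own SageMaker execution role, and the lifecycle configuration name matches the one created in Step 1.

```shell
# Create the GPU notebook instance with the lifecycle configuration attached.
# Replace the role ARN with your own SageMaker execution role.
aws sagemaker create-notebook-instance \
    --notebook-instance-name FastaiNotebook \
    --instance-type ml.p2.xlarge \
    --role-arn arn:aws:iam::123456789012:role/MySageMakerRole \
    --lifecycle-config-name FastaiConfigSetting

# Check the provisioning status; wait for "InService" before opening it.
aws sagemaker describe-notebook-instance \
    --notebook-instance-name FastaiNotebook \
    --query NotebookInstanceStatus
```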

Step 3: Run your fast.ai notebooks

Now that the lifecycle configuration scripts have installed the fast.ai library and dependencies, you are ready to run the fast.ai lesson notebooks on your SageMaker notebook instance.

  1. Open the Amazon SageMaker console, go to the Notebook instances pane, and choose Open for the notebook named FastaiNotebook as shown in the following screenshot:

    A browser tab opens showing a Jupyter notebook page similar to the following:
  2. Navigate to the fastai/courses/dl1 directory to get access to the Jupyter notebooks from the fastai MOOC course. We will open the Lesson 1 notebook named lesson1.ipynb as shown in the following screenshot.
  3. You will be prompted to select a kernel, so select the conda_fastai option as shown in the following screenshot:
  4. If you are running any of the Jupyter notebooks included in the fast.ai library, you need to add a code cell that runs import torch before the other fast.ai import statements. If you skip this, you might get an error such as “ImportError: dlopen: cannot load any more object with static TLS.” The following is an example of a notebook with the code cell added:
  5. You are now ready to run through the notebook. Remember to download the data zip file and place it into the directory /home/ec2-user/SageMaker/fastai/courses/dl1/data.
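For Lesson 1, that means fetching the dogs-vs-cats archive into the data directory, along the lines of the following sketch. The download URL is the one the course pointed to at the time of writing; verify it against the lesson notes if it has moved.

```shell
# Download and unpack the Lesson 1 dataset into the directory the
# notebook expects to find it in.
cd /home/ec2-user/SageMaker/fastai/courses/dl1/data
wget http://files.fast.ai/data/dogscats.zip
unzip -q dogscats.zip
```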

Note: Ensure that you stop your notebook instance when not using it to avoid incurring unnecessary costs.
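Stopping and restarting the instance can also be done from the AWS CLI. Stopping releases the compute instance (and its hourly charge) while preserving the attached ML storage volume, so your notebooks and cloned repository survive a restart:

```shell
# Stop the notebook instance to halt per-hour compute charges;
# the ML storage volume and your notebooks are preserved.
aws sagemaker stop-notebook-instance --notebook-instance-name FastaiNotebook

# Start it again later; the on-start lifecycle script reruns automatically.
aws sagemaker start-notebook-instance --notebook-instance-name FastaiNotebook
```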

Conclusion

In this blog post we have shown you the steps to automatically set up the fast.ai library and its associated dependencies on an Amazon SageMaker notebook using the lifecycle configuration feature of SageMaker. Now you have an environment to work through the lessons in the fast.ai course and even build your own fast.ai based models. In our next blog post we’ll show you how to deploy your fast.ai model to the Amazon SageMaker model hosting service, which provides an HTTPS endpoint that uses your model to provide inferences to downstream applications.

Don’t forget to shut down the notebook instance when you have finished to avoid unexpected charges.


About the Author

Matt McClean is a Partner Solution Architect for AWS. He works with technology partners in the EMEA region, providing them guidance on developing their solutions using AWS technologies, and is a specialist in machine learning. In his spare time, he is a passionate skier and cyclist.