AWS Contact Center
Hear, Here at City of London: Build a DIY audio tour with Amazon Connect
Co-authored with Dr. Mark A. Tovey, Postdoctoral Fellow, Western University, in Collaboration with the Culture Office City of London
The City of London, Canada, has partnered with Hear, Here, an audio interpretive sign project founded in La Crosse, Wisconsin. Together, they are setting up the first Canadian Hear, Here project.
This blog post describes the project, which is built on Amazon Connect, the self-service, cloud-based contact center service from AWS. It then shows you how to build your own do-it-yourself (DIY) audio tour.
Hear, Here collects stories and voices of London’s past. It then delivers them in bite-size audio clips associated with historical and cultural sites around the city. People who call the phone number on a Hear, Here interpretive sign can hear a story that happened at the location of the sign.
The city will display Hear, Here street signs in three London, Ontario, neighborhoods. Each sign has a story number printed on it (STORY 3), along with a phone number to call. A passerby can call the phone number, and enter the story number that is printed on the sign. They hear the narrative of an event that happened on that spot, in the voice of an original participant. Imagine filling neighborhoods with stories. When you know the stories of a place, a simple stroll becomes a journey for the imagination.
Finally, after a listener has heard the story, they can stay on the line to leave their own story of that location or any other location in the neighborhood. If it fits the objectives of the project, then the story is added to the project with a new sign. Otherwise, it can be added to an existing sign location. In this way Hear, Here is responsive to users and becomes community-generated.
The following are requirements for the project:
- Pedestrians must be able to call the number printed on the sign (STORY 3). The phone system must then welcome the caller and ask for the number of the story that they want to hear (“Please enter a story number”). Entering a valid story number on the keypad should play the corresponding story.
- Adding stories and linking stories together must be straightforward, because the project library will be maintained by university or high school students, neighborhood associations, city heritage boards, or other arts and culture organizations.
- Implementing the automated IVR must be simple enough for non-technical users, and it should be possible to build it quickly.
- Voicemail functionality must let callers become part of the project.
- Costs must be limited, with no upfront payments.
In the public sector, bringing new telephony-based services to the community is sometimes difficult. For instance, you have to design the system for the high-water mark of users and pay for it all upfront. Integrating many infrastructure components also adds complexity to the design and delays implementation.
So how can you deploy all of this in a short timeframe and on a tight budget?
How to build an automated audio guide in an hour or less
Enter Amazon Connect, the AWS self-service, cloud-based contact center service. It makes it easy for any business to deliver better customer service at lower cost, with no specialized skills required. Check out this blog post if you don’t believe us: So easy, an eight-year-old can use it.
The next sections discuss the following steps to enable your own automated audio guide:
- Prepare the environment
- Upload your recordings
- Set up a database table to dynamically select the recording
- Create a function to look up the correct record from the database
- Create the Contact Flow
- Test and validate the solution
Prepare your environment
To build the contact flow, you must get your environment ready.
First, log in to your AWS account. Create an Amazon Connect instance, if you haven’t done so already. For more information, see Amazon Connect – Customer Contact Center in the Cloud. The AWS Free Tier lets you gain experience with the AWS platform, products, and services.
Use an AWS Lambda function to dynamically fetch the recording to play to the caller. AWS Lambda is a serverless compute service that automatically runs your code without requiring you to provision or manage servers. For more information about Lambda basics, see this AWS Lambda tutorial.
Finally, use Amazon DynamoDB, a fully managed NoSQL database (key-value and document database) that delivers single-digit millisecond performance. For more information, see this Amazon DynamoDB tutorial.
Now that you have the basics, you are almost ready to build your first contact flow. First, upload your library of recorded stories. To keep the design scalable and to support linked stories, you then create a supporting Amazon DynamoDB table and an AWS Lambda function that dynamically retrieves the story recording.
Upload your recordings
Upload your recordings into the Amazon Connect prompt library. To do so:
- Open your Amazon Connect instance.
- In the left pane, choose Routing, Prompts.
- Choose Create new prompt.
- Create your prompts by recording your voice or importing an audio file that you would like to play. You can also browse from a drive attached to your PC. Only 8 kHz .wav files smaller than 50 MB are supported for prompts.
- In Step 1: Upload or record your prompt, choose the Upload tab.
- Choose file.
- Choose the file that you want to upload, go to Step 2, enter the name of your prompt (for example, story_1), and choose Create.
- Repeat these steps to create a prompt for every file that you need in your library. After your audio files, or stories, are uploaded, take note of each story's Prompt ID to complete the next steps.
- In the Prompt pane, choose each of your prompts and copy the Prompt ID, as shown in the following example:
- Paste the Prompt ID into a text file. Also include the unique code that you want to assign to the story (for example, 12345). This code is needed later.
Create the Amazon DynamoDB table
Now create an Amazon DynamoDB table to store all of your story codes. Each item stores the story's Prompt ID and the code of the next recording to play if the caller asks for more.
- Open the Amazon DynamoDB console.
- Choose Create table.
- For Table name, enter stories. For Primary key, enter storyid.
- Leave Use default settings selected, and choose Create.
- After creating the table, add the story codes that you uploaded. Choose stories, then choose the Items tab.
- Choose Create Item.
- Enter the story code; for example, 12345.
- Choose the “+” sign to the left of your storyid, choose Append, then choose String.
- In the FIELD box, enter promptid. In the VALUE box, enter the Prompt ID that you saved in the text file for story 12345.
- If there is a linked story, enter nextstoryid in the FIELD box. In the VALUE box, enter the linked story code, for example: 67890. Choose Save.
- Repeat steps 5 to 10 for all stories that you want to call from your dynamic prompt.
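If you prefer to script this step, the table and items can also be created with the AWS SDK. The following sketch (Python with boto3) performs the equivalent of the console steps above; the story codes and prompt IDs are placeholders that you would replace with the Prompt IDs saved from your own prompt library, and the on-demand billing mode is a simplification of the console's default settings.

```python
# Sketch: create and seed the "stories" table with boto3 instead of the console.
# One record per story; nextstoryid is optional and links to another story.
STORIES = [
    {'storyid': '12345', 'promptid': 'your-prompt-id-1', 'nextstoryid': '67890'},
    {'storyid': '67890', 'promptid': 'your-prompt-id-2'},
]

def create_and_seed(table_name='stories'):
    import boto3  # imported here so STORIES can be inspected without boto3 installed
    dynamodb = boto3.resource('dynamodb')
    # Equivalent of steps 1-4: a table keyed on the storyid string.
    table = dynamodb.create_table(
        TableName=table_name,
        KeySchema=[{'AttributeName': 'storyid', 'KeyType': 'HASH'}],
        AttributeDefinitions=[{'AttributeName': 'storyid', 'AttributeType': 'S'}],
        BillingMode='PAY_PER_REQUEST',
    )
    table.wait_until_exists()
    # Equivalent of steps 5-10: write one item per story.
    with table.batch_writer() as batch:
        for story in STORIES:
            batch.put_item(Item=story)
    return table

if __name__ == '__main__':
    create_and_seed()
```

Scripting the seed data makes it easier for students or volunteers to add a batch of new stories later without clicking through the console item by item.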
Create the AWS Lambda function
Next, create an AWS Lambda function to dynamically fetch the recording for the story requested by the caller. In the Lambda console, create a new AWS Lambda function with the right permissions to read Amazon DynamoDB. This means that you need to assign the appropriate IAM role. For more information about proper permissions, see the AWS Lambda Permissions Model documentation.
Your role should look like the one shown in the following example:
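As an alternative to clicking through the IAM console, a minimal inline policy can be attached with a short script. The sketch below assumes a role name (hearhere-lambda-role) and uses a wildcard table ARN for illustration; the execution role also needs the usual AWS Lambda logging permissions (for example, the AWSLambdaBasicExecutionRole managed policy).

```python
import json

# Minimal policy: allow Query on the "stories" table only. The wildcard
# Region/account in the ARN is for illustration; narrow it in production.
READ_STORIES_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:*:*:table/stories",
    }],
}

def attach_policy(role_name='hearhere-lambda-role'):
    """Attach the inline read policy to the function's execution role."""
    import boto3  # imported here so the policy can be inspected without boto3 installed
    iam = boto3.client('iam')
    iam.put_role_policy(
        RoleName=role_name,
        PolicyName='ReadStoriesTable',
        PolicyDocument=json.dumps(READ_STORIES_POLICY),
    )

if __name__ == '__main__':
    attach_policy()
```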
The function uses Python 2.7 as Runtime (the code below also runs unchanged on Python 3).
The code used is the following:

```python
from __future__ import print_function
import boto3
from boto3.dynamodb.conditions import Key

# AWS service constant definitions
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('stories')

def lambda_handler(event, context):
    # Set variables based on the received event
    story_id = event['Details']['Parameters']['storyid']
    next_story_id = event['Details']['Parameters']['nextstoryid']
    next_story = event['Details']['Parameters']['nextstory']

    if story_id == "9":
        # The caller requested to disconnect the call (storyid=9),
        # so use 9 as the lookup key
        story_key = story_id
    elif next_story == "99":
        # The caller requested the next story (the nextstory parameter equals 99).
        # Default the lookup key to 'err' in case no nextstoryid was passed
        story_key = 'err'
        # If nextstoryid holds a real value, use it as the lookup key
        if next_story_id is not None and next_story_id != '':
            story_key = next_story_id
    else:
        # No next-story request, so use the storyid parameter
        # to look up the prompt to play
        story_key = story_id

    # Query the DynamoDB table to retrieve the promptid of the recording
    items = table.query(KeyConditionExpression=Key('storyid').eq(story_key))

    # Check whether the query found a corresponding story
    if not items['Items']:
        # No record with this storyid: return an error code
        return {"promptid": "xxx", "storyid": "err"}
    # Found the record: return it to Amazon Connect
    return items['Items'][0]
```
After the function is created, paste the previous code into the code panel, and choose Save.
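Before wiring the function into a contact flow, it can help to sanity-check its branching logic locally. The following sketch extracts that logic into a small, dependency-free function (resolve_story_key and make_event are names invented for this example) and exercises it with hand-made events in the shape that Amazon Connect sends:

```python
# Dependency-free sketch of the handler's branching: given the Connect event,
# which key is looked up in the stories table?

def resolve_story_key(event):
    params = event['Details']['Parameters']
    story_id = params.get('storyid')
    next_story_id = params.get('nextstoryid')
    next_story = params.get('nextstory')

    if story_id == '9':        # caller asked to disconnect
        return story_id
    if next_story == '99':     # caller asked for the linked story
        if next_story_id:      # 'err' when no linked story exists
            return next_story_id
        return 'err'
    return story_id            # first request: use the keypad entry

def make_event(**params):
    """Build an event in the shape Amazon Connect passes to Lambda."""
    return {'Details': {'Parameters': params}}

print(resolve_story_key(make_event(storyid='12345', nextstory='0', nextstoryid='')))        # 12345
print(resolve_story_key(make_event(storyid='12345', nextstory='99', nextstoryid='67890')))  # 67890
print(resolve_story_key(make_event(storyid='12345', nextstory='99', nextstoryid='')))       # err
```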
You can finally build the contact flow and open your system “for business.”
Create the contact flow
Next, build the contact flows. A contact flow defines each step of the experience that customers have when they interact with your contact center. In this section, you create two contact flows. The first contact flow includes:
- A welcome message.
- An initial menu in which the caller can choose the story to listen to.
- An invocation of the AWS Lambda function created in the previous section.
- Finally, the dynamic prompt that plays the correct recording.
The second contact flow informs callers about the options available to them after they listen to the first story.
To start:
- Open Amazon Connect, and choose the instance that you have previously created.
- In the left pane, choose Contact flows. In the AWS Lambda section, choose the function that you created (ReadPromptID) from the drop-down menu, and choose Add Lambda Function. Your Amazon Connect instance now has permission to invoke your AWS Lambda function. Log in to your instance, and let’s start building.
- In the left pane, choose Routing, Contact flows, Create contact flow.
First, you want your caller to be greeted, so:
- Drag and drop in your flow the Play prompt block from the Interact palette.
- Choose the title of the Play prompt block to open the setting for the block.
- Choose Text to speech (Ad hoc), and type the text to welcome your caller:
In this case, you do not use a preloaded audio file. Instead, you use the Amazon Connect integration with Amazon Polly, the AWS text-to-speech service.
Now that you have welcomed the caller, you need to ask which story they want to listen to. You need to check whether they have already listened to one story and asked for the next. To do so, use a contact attribute to store the contextual information.
In the Contact flow designer:
- Add a Check contact attribute block from the Branch section.
- Open the setting for the Check contact attributes block.
- In the Type drop-down menu for Attribute to check, choose User Defined.
- In the Attribute text box, type nextstory.
- Choose Add another condition.
- Choose Equals, and enter 99.
- Choose Save.
If the check does not match, the next step is to ask the caller to enter the story code on their keypad. To do this:
- Choose Store customer input from the Interact section.
- Connect the new block to the No Match branch of the Check contact attributes block.
- In the setting window for the block, choose Text to speech (Ad hoc) in the Prompt section and enter the text for the menu:
- Choose Custom, with Maximum Digits set to 5 and Delay between entry set to 3, in the Customer Input configuration.
- Choose Save.
If an error occurs, add a Play prompt and gracefully terminate the call. If successful, you call your AWS Lambda function:
- Add the Invoke AWS Lambda function from the Integrate section.
- In the Invoke AWS Lambda function setting, choose Select a function. From the menu, choose the name of your Lambda Function.
- In Function input parameters, add three parameters by using the following settings:
  - First parameter: Use attribute, Destination key = storyid, Type = System, Attribute = Stored customer input.
  - Second parameter: Use attribute, Destination key = nextstory, Type = User Defined, Attribute = nextstory.
  - Third parameter: Use attribute, Destination key = nextstoryid, Type = User Defined, Attribute = returnednextstoryid.
- Connect the Error branch to the Play prompt to gracefully disconnect the call.
- Connect the Success branch (successful flow) to a Set contact attributes block.
- Connect the first Check contact attributes block =99 branch to the Invoke AWS Lambda function.
- In the newly added Set contact attributes block settings, add three attributes to save by using the following settings:
  - First attribute: Use attribute, Destination key = returnedpromptid, Type = External, Attribute = promptid.
  - Second attribute: Use attribute, Destination key = returnedstoryid, Type = User Defined, Attribute = storyid.
  - Third attribute: Use attribute, Destination key = returnednextstoryid, Type = User Defined, Attribute = nextstoryid.
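To visualize what this block is for, the following sketch (plain Python, not Amazon Connect itself) mirrors how the destination keys above relate to the fields of the Lambda function's JSON response; the attribute values are invented for the example:

```python
# The Lambda function returns a DynamoDB item such as this (values invented):
lambda_response = {'promptid': 'a1b2c3', 'storyid': '12345', 'nextstoryid': '67890'}

# Destination key in the flow  <-  field of the Lambda response
attribute_map = {
    'returnedpromptid': 'promptid',
    'returnedstoryid': 'storyid',
    'returnednextstoryid': 'nextstoryid',
}

# After the Set contact attributes block runs, later blocks can reference
# these attributes (for example, returnedpromptid selects the prompt to
# play dynamically).
contact_attributes = {dest: lambda_response[src] for dest, src in attribute_map.items()}
print(contact_attributes)
```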
At this point, branch the flow based on the contact attribute returnedstoryid (use a Check contact attributes block). If it equals err, the story code entered has no match in your table; loop back and ask the caller to reenter the story code, for example: “Sorry, I could not find the story requested. Could you please try again?” Otherwise, play the recording that the caller selected.
To properly configure the Play prompt block:
- Open the block settings window.
- Choose Select from the prompt library, Select dynamically.
- Set Type to User Defined, and Attribute to returnedpromptid.
Your contact flow is almost complete and ready to use. For the moment, link the last Play prompt block to a Disconnect/hang up block so that you can Save & Publish it.
To complete your audio guide, you must implement a menu of follow-up actions for when the caller has finished listening to the selected story. To keep the flow easy to read, create this menu in a separate contact flow, named PostReadStoryFlow. Later, you transfer the call from the initial flow to this one.
In the new flow, start by dragging a Get customer input block into the work canvas. Configure the menu with two options:
- Press 1 or stay on the line (Default/Timeout) to listen to the next story.
- Press 2 to enter a different story code.
To complete this second contact flow, follow these steps:
- Configure the Get customer input settings:
- Choose Text to Speech (Ad Hoc) and enter the menu to read to the caller, for example:
“Press 1 or stay on the line (Default/Timeout) to listen to the next story. Press 2 to enter a different story code”.
- Choose the tab DTMF.
- Leave the default Set Timeout.
- Add two options, labeled respectively 1 and 2.
- Set the contact attribute nextstory to the value 99 (listen to the next story) or 0 (listen to a new story), and then loop it back to the ReadStoryFlow with a Transfer to Flow.
- Open the Transfer to Flow settings, and from the Select a flow drop-down menu, select the ReadStoryFlow.
- Publish this flow.
In the ReadStoryFlow, delete the connection from the dynamic Play prompt to the Disconnect/hang up block. Instead, link it to a Transfer to Flow that you configure to use the PostReadStoryFlow, and republish the flow.
Your flow is now completed. Next, you attach it to the number that you claimed so that users can start calling you. To do so:
- Choose Routing, Phone numbers.
- Select your phone number.
- Assign to it your new contact flow.
- Choose Save.
Try the solution
Congratulations, you’re done! Your first audio guide application is live. Call your number and listen to the stories that you uploaded.
To try the solution that you built by following the steps in this post:
- Call your number.
- Listen to your Welcome prompt.
- When prompted, enter on the keypad the five-digit code of the story that you want to hear.
- When the recording has fully played, choose the next story if you have linked stories. A new recording starts playing automatically.
If, instead, you are curious to see how our system works, take out your phone and ignite your imagination.
You are walking the beautiful streets of the AWS Cloud in London, Ontario. You see a sign for our project:
This first sign contains a telephone number (+1 844-865-7597) and two story codes: 1 and 2.
Call and listen to the story. If you listen to story number 1, press 1, or stay on the line to automatically listen to story 2.
Continue exploring. You will discover other signs. Each one has a story code (3, 4 or 5).
Follow the instructions to listen to these stories. Press 9 when you want to disconnect.
We hope you enjoy your virtual tour of our Cloud Contact Center.
Conclusion
The solution that we built together in this post implements an audio guide without requiring a specialized device. Anyone with a mobile phone can listen to the recording. You can now use this solution in your own environment to let people listen to your content while walking through your museum or in the streets of your city.
It was important that the system be straightforward to set up, inexpensive to start and run over the long term, and simple enough that students could upload new stories. It also had to be easy to add new cities and locations when the time comes.
This approach, with a few refinements, has met the needs of the first Canadian Hear, Here project, in London, Ontario, Canada. If cities in other parts of the world want to adopt Hear, Here, Amazon Connect can follow along. It is straightforward and inexpensive to add new instances and new phone numbers. It is also easy to add new stories. After the system is set up and running, university or high school students, neighborhood associations, city heritage boards, or any other arts and culture venue affiliated with the project can collect new stories and upload them. If you would like to set up a Hear, Here project in your community, please contact Dr. Ariel Beaujot, Executive Director of Hear, Here.
Before the project goes live, we do plan some other enhancements. For example, moving to full conversation mode by integrating our contact center with Amazon Lex, the AWS chatbot solution. If you are interested in modifying your contact flow to take advantage of Amazon Lex, see the blog post by Randall Hunt, New – Amazon Connect and Amazon Lex Integration.
Hear, Here London officially launches on Saturday, April 27, 2019 from 1:00 to 4:00 PM, at 255 Horton Street West, London, ON. Thereafter, the project will be available in London for all to view.