AWS for M&E Blog

Increase engagement with localized content using Amazon Bedrock Flows

Content producers and publishers have large collections with thousands, if not hundreds of thousands, of articles that could be localized for new audiences and geographies, increasing engagement in new and emerging markets. Localization is, broadly, the process of transforming content (translation, lexical choice, and tone shifts) to suit a new audience as it moves from one geography to another.

Human-driven localization cannot scale to the volume required, so automated translation is often used. However, automated localization has historically struggled with quality, especially in its ability to capture the nuances of a specific market, which often leads to localized content earning lower engagement than the original. With foundation models that support multiple languages and dialects, media and entertainment customers are increasingly using generative AI to deliver higher quality localized content.

Amazon Bedrock Flows offers an intuitive, no-code visual builder and a set of APIs to seamlessly link state-of-the-art foundation models in Amazon Bedrock (such as Anthropic’s Claude and Amazon Nova) with other AWS services, automating user-defined generative AI workflows that go beyond submitting a single prompt to a large language model.

Amazon Bedrock Prompt Management provides a streamlined interface to create, evaluate, version control, and share prompts. It helps developers and prompt engineers achieve the best responses from foundation models for their use cases.

We’ll demonstrate how you can take advantage of Amazon Bedrock features (such as Flows, Prompt Management, and a choice of foundation models) to quickly build and test a workflow that takes existing content, produces localized copy, and delivers an evaluation of the changes made for editorial review.

Scenario overview

Imagine you are an online publisher planning to expand readership into new geographies and channels without relying exclusively on net-new local content. You want to create a workflow that:

  1. Localizes the existing text content for a specific country and language to better align with local markets and advertising strategy.
  2. Adapts content for new, emerging channels (like short-form social media) using style guides currently loaded into Amazon Bedrock Knowledge Bases to help content editors check their work.
  3. Provides an evaluation on metrics (such as factual correctness, length, dialect and overall changes in meaning) so content editors can make an informed choice to publish or make further changes.
High-level architecture diagram outlining a content management system that uses Amazon Bedrock Flows and Amazon Bedrock Knowledge Bases to localize content and provides an evaluation that helps content editors publish new content.

Figure 1: High-level architecture.

Prerequisites

Before creating the flow and prompts, make sure you have the following set up:

Create the prompts

Before creating the flow, you need to create two prompts that the flow will use later.

Our first prompt localizes the content. For it, we used Anthropic’s Claude 3.7 Sonnet foundation model, which supports languages such as English, French, Spanish, Portuguese, and Japanese, among others.

When creating the prompts, dynamic values are expressed as variables, such as {{article}} for the article text or {{language}} for the target language. This allows values to be passed between different flow nodes, for greater flexibility and reuse.
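Prompts like this can also be created programmatically with the Amazon Bedrock `CreatePrompt` API. The sketch below shows one way this could look with boto3; the template text is abbreviated for brevity, and the model ID string is an assumption to verify against the current model catalog:

```python
def build_localization_variant(model_id: str) -> dict:
    """Build a prompt variant whose template references {{variable}} placeholders."""
    # Abbreviated version of the localization prompt shown in Figure 2.
    template_text = (
        "Task:\n"
        "- You are an expert translator in English to {{language}} translation.\n"
        "- Take {{article}} and localize it to {{language}} from the perspective "
        "of a native speaker in {{country}}, according to guidelines set out in {{query}}."
    )
    return {
        "name": "default",
        "templateType": "TEXT",
        "modelId": model_id,
        "templateConfiguration": {
            "text": {
                "text": template_text,
                # Each placeholder must be declared so flow nodes can map values to it.
                "inputVariables": [
                    {"name": "article"},
                    {"name": "country"},
                    {"name": "language"},
                    {"name": "query"},
                ],
            }
        },
    }

if __name__ == "__main__":
    import boto3

    # Prompt Management lives on the "bedrock-agent" control-plane client.
    client = boto3.client("bedrock-agent", region_name="us-west-2")
    response = client.create_prompt(
        name="ContentLocalization",
        description="Localizes an article for a target country and language.",
        variants=[build_localization_variant("anthropic.claude-3-7-sonnet-20250219-v1:0")],
    )
    print(response["id"])
```

Declaring each `{{variable}}` in `inputVariables` is what lets the flow builder wire node outputs to the prompt's placeholders later.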

A screenshot showing the content localization prompt details, including name, description, system instructions, user message, model configuration, and hyperparameter settings. Dynamic values are handled using variables, written like {{variable}}. In the Overview area, the Name is ContentLocalization and the Description is: This prompt processes key components of a localization request (the article text, the target country and language) and a RAG query for a style guide knowledge base to enhance the prompt output. There are no tags in the Tags area. In the Prompt draft area, the system instructions are: You are an expert journalist, specialized in localization of articles to different markets. The user message is written as follows: Task: - You are an expert translator in English to {{language}} translation. - Take {{article}} and localize it to {{language}} from the perspective of a native speaker in {{country}} according to guidelines set out in {{query}}. - Adapt the content in {{language}} to any cultural or country-specific differences in terminology directly in-line. - Do not create or suggest a title. - At the end of your response, provide an exhaustive list in English outlining the changes you have made. Do not summarize in {{language}}. - Ensure your response is of a similar length to {{article}} and there are no changes in the facts described in the original article or major structural changes. - Avoid stating facts that may be affected by your knowledge cutoff date, such as currency conversion for unstable currencies.

Figure 2: Content localization prompt.

The second prompt evaluates the original and localized content to provide feedback to the content editor. To keep the evaluation independent of the localization prompt, this prompt uses Amazon Nova Pro, a highly capable multimodal model that balances accuracy, speed, and cost for a wide range of tasks and supports over 200 languages.

A screenshot showing the content evaluation prompt details such as name, description, system instructions, user message, model configuration and hyperparameter settings. Dynamic values are handled using variables, written like {{variable}}. The Overview section has the Name ContentEvaluation and Description: This prompt evaluates original and localized text while providing a numerical score across key metrics. There are no tags in the Tags area. The prompt is written as follows: You are an impartial judge, evaluate the {{article}} and {{local}} based on the following metrics: 1. Factual correctness 2. Length 3. Local dialect 4. Overall change in meaning and tone. Provide a decimal score between 0 and 1 and brief feedback for each metric. Do not provide evaluation on any other metric.

Figure 3: Content evaluation prompt.

Create and configure the flow

Using the Amazon Bedrock console, navigate to Flows under Builder Tools and create a new flow as shown in Figure 4.

A screenshot outlining the details of an Amazon Bedrock Flow named Content Localization. It provides details such as the name, description, creation date, the IAM role used and a unique flow ID.

Figure 4: Article localization flow.

Upon creation, the browser redirects to the Flow builder; otherwise, select Edit in flow builder to switch to the visual interface.

In the Flow builder, there are different node types you can use to build your flow. You can drag and drop nodes onto the canvas and create links between them, either by defining variables directly in each node or by loading saved prompts.

For this demonstration, the following flow was created (Figure 5).

A screenshot showing a completed Amazon Bedrock Flow with an input node connected to a Knowledge base and two Prompt nodes. Each Prompt node has a Flow output node connected. The nodes are connected with links so that different inputs and outputs can be processed by the nodes.

Figure 5: Content localization flow.

You can review specific settings for each node by selecting it in the Flow Builder.

  1. To simulate text content being sent from a content management system, the Flow input accepts a JSON object with the following attributes:
    1. article: text of the article selected for localization
    2. country: the target country for localization
    3. language: the target language for localization
    4. query: a text prompt to retrieve relevant style guides from Amazon Bedrock Knowledge Bases
  2. Using the Cohere Command R model, the Get_Style_Guides node parses the query text from the input and retrieves relevant results from the knowledge base. The output from this node augments how the localized text is generated.
  3. The Localization prompt node uses the previously created prompt to localize the input text.
  4. The Evaluation prompt node uses the previously created prompt to evaluate the input and localized text.
  5. The two Flow output nodes then stream the text output from each of the two prompt nodes as separate events.
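The steps above can also be exercised programmatically via the Bedrock `InvokeFlow` runtime API. The sketch below is illustrative: the flow and alias identifiers are placeholders, the node name assumes the default Flow input node name, and the sample article text is truncated:

```python
import json

def build_flow_input(article: str, country: str, language: str, query: str) -> list:
    """Build the single JSON document expected by the Flow input node."""
    return [
        {
            # Node names here are the defaults; adjust to match your flow.
            "nodeName": "FlowInputNode",
            "nodeOutputName": "document",
            "content": {
                "document": {
                    "article": article,
                    "country": country,
                    "language": language,
                    "query": query,
                }
            },
        }
    ]

if __name__ == "__main__":
    import boto3

    # Flows are invoked through the "bedrock-agent-runtime" client.
    client = boto3.client("bedrock-agent-runtime", region_name="us-west-2")
    response = client.invoke_flow(
        flowIdentifier="YOUR_FLOW_ID",          # placeholder
        flowAliasIdentifier="YOUR_FLOW_ALIAS",  # placeholder
        inputs=build_flow_input(
            article="UK house prices dipped by 0.5% from November to December...",
            country="Spain",
            language="Spanish",
            query="style guide for Spanish-language news articles",
        ),
    )
    # Each Flow output node emits its own flowOutputEvent on the response stream,
    # so the localization and evaluation arrive as separate events.
    for event in response["responseStream"]:
        if "flowOutputEvent" in event:
            out = event["flowOutputEvent"]
            print(out["nodeName"], json.dumps(out["content"]["document"])[:200])
```

Because each Flow output node produces its own event on the stream, a content management system can render the localized text and the evaluation independently as they arrive.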

Large language models (LLMs) can generate incorrect information due to hallucinations. Amazon Bedrock Flows integrates with Amazon Bedrock Guardrails, letting you configure safeguards that identify, block, or filter unwanted responses in your flow.

Test the flow

Using the Test flow pane in the Flow builder, you can quickly test your complete workflow, with each Flow output node streaming output in near real-time as the flow executes. To demonstrate how generative AI can work with Spanish across European and Latin American markets, the following examples use the same input text but change the target country for localization.

Two side-by-side screenshots of the Test flow pane showing a JSON object containing an article about changing UK house prices, with an article attribute containing the text (for example, UK house prices dipped by 0.5% from November to December, and so on), a country attribute for the target country (for example, Spain), a language attribute for the target language (for example, Spanish), and a query attribute to query the knowledge base containing relevant style guidelines that augment the localization prompt. The left-side request is for Spain in Spanish, and the right-side request is for Argentina in Spanish.

Figure 6: Sample input objects.

The following (Figure 7) is an example of localized text in Spanish. Note how the localization step provides detailed feedback on what changes were made and provides additional context for the specific geography.

Two side-by-side screenshots of the Test flow pane showing content localizations for an audience in Spain (on the left) and Argentina (on the right), respectively. The localized text is displayed first, followed by a numbered section outlining what changes have been made to the original content, citing specific phrases or words. It shows how two requests for the same target language (Spanish) produce different results based on the target country.

Figure 7: Sample localized text.

In the final step, the evaluation uses a different foundation model to independently compare the original and localized content, as well as provide scored feedback to the content editor.

Two side-by-side screenshots of the Test flow pane showing evaluations of the localized content for an audience in Spain and Argentina, respectively. A numerical score between 0 and 1 is given for factual correctness, length, local dialect, and overall change in meaning and tone. In both examples shown, there is also a brief explanation of the score given. For example: Factual Correctness: 1.0 - all facts from the original text have been accurately translated and retained.

Figure 8: Sample content evaluations.

Pricing

Amazon Bedrock Flows charges for every 1000 transitions required to execute your workflow. A transition occurs when data flows from one node to another (for example, an input node transitioning to a prompt node). There are no additional charges for using Amazon Bedrock Prompt Management.
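As a rough worked example of the transition math (the per-run transition count and the price per 1,000 transitions below are illustrative assumptions, not published rates; check the Amazon Bedrock pricing page for current figures):

```python
def flow_cost(transitions_per_run: int, runs: int, price_per_1000: float) -> float:
    """Estimate Bedrock Flows transition charges; model usage is billed separately."""
    return (transitions_per_run * runs / 1000) * price_per_1000

# The Figure 5 flow moves data input -> knowledge base -> localization prompt
# -> evaluation prompt, plus two output nodes; we assume 6 transitions per run.
# price_per_1000 is a placeholder value, not an actual published rate.
estimated = flow_cost(transitions_per_run=6, runs=10_000, price_per_1000=0.03)
```

At these assumed numbers, 10,000 executions would generate 60,000 transitions, so the transition charge scales linearly with both traffic and flow complexity.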

Amazon Bedrock model usage charges vary by the model used and the number of input and output tokens. Note that this demonstration also uses Amazon Bedrock Knowledge Bases configured with an Amazon OpenSearch Serverless vector database and an Amazon S3 data source.

Pricing will vary depending on the AWS Region used. All resources for our demonstration have been configured in the Oregon (us-west-2) Region. Reference Amazon Bedrock, Amazon S3 and Amazon OpenSearch Service pricing as needed.

Conclusion

We demonstrated how media and entertainment customers can configure an article localization workflow using a low code, serverless architecture. By using Amazon Bedrock Flows and Prompt Management, content owners and editors can leverage the benefits of generative AI to deliver more engaging content for new audiences.

Contact an AWS Representative to learn how we can help accelerate your business.

Further reading

Benjamin Le

Benjamin Le is a Solutions Architect within the AWS Telco, Media, Entertainment, Gaming and Sports team, working with Publishers and Internet Service Providers. He helps customers accelerate business transformation using Generative AI solutions.

Emilio Garcia Montano

Emilio Garcia Montano is a Senior Solutions Architect working with Media & Entertainment customers, focused on Publishers in particular. He is passionate about helping customers to solve their industry challenges through innovative Generative AI.