AWS for M&E Blog

Enabling publishers to customize content while maintaining editorial oversight with Amazon Bedrock

Reader engagement is key for any publisher and directly correlates to revenue growth. One of the best mechanisms to increase reader engagement is personalization. Prior to the mainstream availability of generative AI foundation models (FMs), personalization was, and still is, achieved by understanding patterns in reader behaviour and surfacing content relevant to each reader.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon with a single API, along with a broad set of capabilities you need to build generative AI applications.

With the advent of generative AI, it is possible to hyper-personalize, surfacing dynamically generated content based on reader behaviour, preferences, or context. With the help of Large Language Models (LLMs), such as those available through Amazon Bedrock, it is possible to generate or modify content to make it more relevant and engaging to readers, and therefore drive more views and higher consumption. Headline generation is a great use case for this.

While it is possible to use generative AI to adopt a hyper-personalized approach where every reader could get a personalized headline, most publishers want to retain editorial control and oversight of content prior to publication. This blog post demonstrates how to use Amazon Bedrock to personalize article headlines for different customer segments while maintaining editorial oversight.

Personalized headline generation

Text summarization is a popular use case for LLMs and is a capability you can leverage to generate a headline based on the context and content of a given written article. For the purpose of this blog post, you can use Amazon Bedrock to generate fictitious content for which you can later generate a headline; a minimal sketch of how to do this is shown next, followed by the sample article generated with Amazon Bedrock for use as a demonstration.
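
The following is a minimal sketch, assuming your AWS credentials and a Region with Amazon Bedrock are already configured and that model access to Anthropic Claude v2 has been granted. The prompt text is only an illustrative assumption, not the exact prompt used to produce the sample article.

import json

import boto3

# Assumes credentials and a Region with Amazon Bedrock are already configured,
# and that model access to Anthropic Claude v2 has been granted
bedrock_runtime = boto3.client("bedrock-runtime")

# Illustrative prompt only; Claude v2 expects the Human/Assistant format
body = json.dumps({
    "prompt": "\n\nHuman: Write a fictitious news article about the 2024 national budget "
              "of a country called Atlantis, covering infrastructure, healthcare, "
              "education and taxes.\n\nAssistant:",
    "max_tokens_to_sample": 1024,
    "temperature": 0.7,
})

response = bedrock_runtime.invoke_model(modelId="anthropic.claude-v2", body=body)
article = json.loads(response["body"].read())["completion"]
print(article)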

Atlantis Budget 2024: Boosting the Economy through Investments in Infrastructure, Healthcare, Education and Tax Raises

Atlantis City – The much-anticipated 2024 national budget was unveiled today by John Smith, Minister of Economy, in an impassioned speech to parliament. Spanning over 2 hours, Minister Smith described the budget as “a comprehensive economic plan to rebuild Atlantis into a modern, prosperous nation.”

A key focus of the budget is a massive $45 billion investment in infrastructure over 5 years to upgrade aging and inefficient transportation systems, energy grids, water systems and digital networks. An Infrastructure Investment Fund will provide financing for high-priority projects that improve productivity and connectivity between Atlantis’s regions. Several major infrastructure projects have already been approved, including a new international airport in the capital Atlantis City, a high-speed rail line along the east coast, and new solar power plants in the Southern provinces.

Healthcare is another big winner, with spending increasing 6% next year. The budget aims to reduce wait times for surgeries, hire more healthcare workers, expand telemedicine to rural areas, and make prescription drugs more affordable. A new universal dental care program for low-income families is also introduced.

For middle-class Atlantis citizens, tax will bite: through changes to the basic personal exemption, individuals will face an extra $500 of taxed income. This change is projected to cost the average household $750 per year.

Education sees a 4% funding increase, with money going towards hiring more teachers and expanding vocational and early childhood education. Early education will be available from the age of 1 instead of the current age of 3, a major change for working parents. The budget also increases pensions by 10% in line with inflation.

The Minister stated this will “make Atlantis a more attractive place for companies to set up their headquarters and create new high-paying jobs.”

The 2024 Atlantis budget has been praised by many financial experts as a prudent plan to develop the economy. However, opposition parties argue it does not do enough to reduce child poverty and food insecurity. Lively debate will continue as the budget bill now heads to parliament for final approval.

Generating your headlines with Amazon Bedrock

You can start with pre-existing code from our Amazon Bedrock Workshop to make things easier.

Clone the repository:

git clone https://github.com/aws-samples/amazon-bedrock-workshop
cd amazon-bedrock-workshop
git checkout 6c6997b042b8d493b74d2e8ece3eff4dce96a95c

You can find the relevant code snippets in amazon-bedrock-workshop/01_Generation/02_contextual_generation.ipynb

You can now take the code and modify it for your own purposes.

The next step is to import modules and set up the boto3 client for Amazon Bedrock.

import json
import os
import sys

import boto3

module_path = ".."
sys.path.append(os.path.abspath(module_path))
from utils import bedrock, print_ww


boto3_bedrock = bedrock.get_bedrock_client(
    assumed_role=os.environ.get("BEDROCK_ASSUME_ROLE", None),
    region=os.environ.get("AWS_DEFAULT_REGION", None)
)

Once you have the client, use LangChain to instantiate the model with the right parameters.

from langchain.llms.bedrock import Bedrock

inference_modifier = {
    "max_tokens_to_sample": 4096,
    "temperature": 0.5,
    "top_k": 250,
    "top_p": 1,
    "stop_sequences": ["\n\nHuman"]
}

textgen_llm = Bedrock(
    model_id="anthropic.claude-v2",
    client=boto3_bedrock,
    model_kwargs=inference_modifier
)

Next, it is time to write the prompt. A prompt is the text input that is provided to an LLM to generate a response. Prompts provide context and guide the AI’s response by specifying the topic, style, point of view, etc. that the requester wants the AI to adopt in formulating its reply. Prompts allow you to steer the output and direct the AI as needed.

A good prompt is often composed of the following:

  • Instruction: A task description or instruction on how the model should perform
  • Context: Additional or external information to steer the model's response
  • User input: The input or question that the model needs in order to provide an output
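
To make these components concrete, here is a minimal, illustrative sketch; the wording is an assumption, and the full prompt template used later in this post follows the same pattern.

# Illustrative only: the three prompt components assembled into a single string
instruction = "Act as an AI editor assistant and generate a headline for the story provided."
context = "Context: The reader is a Pensioner and the editorial style is News."
user_input = "Write a good headline that matches the following story: <article text here>"

prompt = "\n".join([instruction, context, user_input])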

You will generate prompts for different reader segments previously identified in the reader base. Another requirement of publishers is to align content to a particular house style. Publishers often have multiple publications and share content across them; naturally, each publication's house style also applies to headlines, so it is important to incorporate it into the context.

Generating content with Amazon Bedrock is straightforward: you can consume the API directly. You will use a technique called dynamic prompting, which updates the prompt on every request depending on the parameters.

from langchain.prompts import PromptTemplate

# Create a prompt template
multi_var_prompt = PromptTemplate(
    input_variables=["customerSegment", "editorialStyle", "contentFromArticle"],
    template="""
Act as an AI editor assistant by generating a fully personalized headline for the stories provided to you. Base the headline on the content from the story. Consider the context below to write the headline as personalized as possible so that it catches the reader's attention. Focus on extracting the information from the story that is most personally relevant for the reader, even if you have to leave other information out. Write 3 different headlines so a human editor can choose which would be most suitable.
Context: The reader is a {customerSegment} and the editorial style is {editorialStyle}
User: Write a good headline that matches the following story:
<article_content>
{contentFromArticle}
</article_content>

Assistant:"""
)

Next, invoke Amazon Bedrock with the right parameters and content. In the following example, you want a personalized headline for a segment (“Pensioner”) and editorial style (“News”).

# Instantiate variables
prompt = multi_var_prompt.format(
    customerSegment="Pensioner",
    editorialStyle="News",
    contentFromArticle="""
My article content ...
...
...
"""
)

response = textgen_llm(prompt)

# Drop the first line of the response (the model's preamble) and keep the generated headlines
headline = response[response.index('\n')+1:]
print_ww(headline)

One of the headlines generated is: “2024 Budget Brings Welcome 10% Pension Rise Alongside Infrastructure and Healthcare Investments”, which is consistent with the content and style provided.
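
Because the prompt asks for three headlines, you may want to present each candidate to the editorial team separately. A minimal sketch, assuming the model returns the candidates as numbered lines (for example "1. …"), could look like this:

import re

# Extract the numbered candidate headlines from the model output
candidate_headlines = [
    re.sub(r"^\s*\d+\.\s*", "", line).strip()
    for line in headline.split("\n")
    if re.match(r"^\s*\d+\.", line)
]

for candidate in candidate_headlines:
    print_ww(candidate)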

In the following table, you can see a selection of different results.

Editorial flow oversight

The following diagram shows a proposed editorial flow that integrates Amazon Bedrock generative AI capabilities within a pre-existing content management system (CMS).

Diagram of a proposed editorial flow integrated with Amazon Bedrock generative AI capabilities within a pre-existing CMS.

  1. An Article in the CMS is marked as ready for editorial review.
  2. This invokes an internal CMS event that calls the custom headline API powered by AWS Lambda and Amazon Bedrock, which returns a collection of personalized headlines per reader segment and house style (a minimal sketch of such a function follows this list).
  3. The editorial team can now review personalized headlines and decide to change them if so desired.
  4. Finally, editorial approval can be granted, and the content and the personalized headlines are published.
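
The custom headline API in step 2 could be implemented in many ways. The following is a minimal sketch of an AWS Lambda handler, assuming the CMS sends the article content and editorial style in the event payload; the segment names, event shape, and prompt wording are assumptions for illustration.

import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Assumed reader segments; in practice these would come from the publisher's audience data
READER_SEGMENTS = ["Pensioner", "Young professional", "Parent"]

# Compact version of the prompt template shown earlier, in the Human/Assistant format
PROMPT_TEMPLATE = """

Human: Act as an AI editor assistant and write 3 personalized headlines for the story below.
Context: The reader is a {segment} and the editorial style is {style}
<article_content>
{content}
</article_content>

Assistant:"""


def lambda_handler(event, context):
    # Assumed event shape sent by the CMS webhook
    article = event["articleContent"]
    style = event.get("editorialStyle", "News")

    headlines = {}
    for segment in READER_SEGMENTS:
        body = json.dumps({
            "prompt": PROMPT_TEMPLATE.format(segment=segment, style=style, content=article),
            "max_tokens_to_sample": 512,
            "temperature": 0.5,
        })
        response = bedrock_runtime.invoke_model(modelId="anthropic.claude-v2", body=body)
        headlines[segment] = json.loads(response["body"].read())["completion"].strip()

    # The editorial team reviews these suggestions in the CMS before anything is published
    return {"statusCode": 200, "body": json.dumps({"headlines": headlines})}

The CMS could then store the suggestions alongside the article so editors can accept, edit, or discard them during review.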

Conclusion

This blog post details how Amazon Bedrock can help publishers increase customer engagement by assisting in generating personalized headlines at a scale not possible using traditional means. With the use of generative AI, it is possible to increase productivity while maintaining stable costs. While headline drafts are generated automatically, editors and journalists still have the last word as to what is published, when, and to which customer segment, ensuring the highest standards of journalism are followed.

Emilio Garcia Montano

Emilio Garcia Montano is a Solutions Architect working with Media & Entertainment customers, and Publishers in particular. He is passionate about helping customers solve their industry challenges through the use of generative AI.

Mark Watkins

Mark Watkins is a Principal Solutions Architect within the Telco, Media & Entertainment, Games, and Sports team, specialising in publishing solutions using AI. Away from professional life, he loves spending time with his family and watching his two little ones grow up.

Willy Joseph

Willy Joseph is a Sr. Mgr. of Solutions Architecture with AWS for Media & Entertainment, Games, and Sports. He leads the AMER Solutions Architect Specialist team, which delights customers across the Media & Entertainment, Games, and Sports industries.