AWS Quantum Technologies Blog
Citi and Classiq advance quantum solutions for portfolio optimization using Amazon Braket
This post was contributed by Yoram Avidan, Iftach Yakar, and Zachar Borsutsky from Citi, Adam Goldfeld and Tamuz Danzig from Classiq, and Perminder Singh from AWS.
Citi Innovation Labs, a group tasked with identifying and harnessing emerging technology for Citi, partnered with Classiq on this project for its expertise and support. Classiq’s stack allows modeling at a higher level of abstraction tailored for quantum software development, bridging the gap between the quantum information and financial worlds, and offering access to subject-matter experts and tools that make it easy to design quantum algorithms. Classiq uses Amazon Braket to access on-demand simulators and quantum processing units (QPUs).
In this post, we describe the results of our work with the Israel-based startup Classiq, using quantum computers to study portfolio optimization.
Background
Portfolio optimization is the process of selecting the optimal mix of assets, such as stocks, bonds, and other financial instruments, to achieve the highest possible returns for a given level of risk. However, in the current stage of quantum computing technology, known as the Noisy Intermediate-Scale Quantum (NISQ) era, quantum computers face limitations due to noise and qubit count, which constrains the capacity of quantum applications. While it remains an open question whether some NISQ algorithm provides an advantage over classical methods, variational quantum algorithms are considered a prime candidate for finding such a speedup – in particular, the Quantum Approximate Optimization Algorithm (QAOA).
The focus of our study was employing the QAOA quantum algorithm for portfolio optimization, investigating how adjustments to the algorithm’s penalty factor (when introducing constraints to the problem) impacted the algorithm’s performance.
If the QAOA algorithm ultimately proves to have a comparative advantage over classical methods, the fine-tuning strategies discussed in this exploration could pave the way for improved results in portfolio optimization and potentially other complex financial challenges that Citi faces.
Modeling the portfolio optimization problem
Consider a case where you must choose between two assets:
- Asset A: high risk with a high return.
- Asset B: low risk with a low return.
The adventurous investor would choose the riskier option, while a risk-averse investor would take the safer option. However, for most investors, the optimal choice is somewhere in between. By selecting a combination of these two assets, an investor can control the balance between risk and return.
Portfolio optimization can be mathematically modeled as finding the portfolio of assets (ω) that minimizes the following function (see the short numerical sketch after the list below):

minimize over ω:  ½ ωᵀ𝛴ω − 𝝀 Rᵀω

Where:
- ω is a vector, representing the distribution of a portfolio of assets, whose weights signify the amount of each asset selected. It can contain discrete values (e.g., 1, 23, 512) for assets that can be purchased or sold only in fixed amounts, or continuous values (e.g., 2.5, 33.34, 86.912) when considering the percentage each asset represents out of the entire portfolio.
- R is a vector, representing the expected return of each asset. It is the anticipated gain or loss that an investor can expect to receive from an investment over a specific period.
- 𝛴 is the covariance matrix of the returns, where 𝛴ij = cov(i,j) for assets i, j. It measures how different assets are mutually correlated (for assets i ≠ j), and how volatile an asset is (for assets i = j). Investing in strongly correlated assets increases the overall portfolio’s risk because two correlated assets are likely to both lose value if either one asset loses value.
- 𝝀 ≥ 0 is the “risk appetite” factor, where 𝝀 = 0 results in a portfolio with minimal risk and 𝝀 → ∞ results in a portfolio with maximum return without regard to risk. This is a parameter we can tweak.
- ½ ωᵀ𝛴ω is the variance, or the risk, of the entire portfolio.
- Rᵀω is the expected return of the entire portfolio.
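To make the objective concrete, here is a minimal numerical sketch, using made-up expected returns and covariances for three hypothetical assets, that evaluates ½ ωᵀ𝛴ω − 𝝀 Rᵀω for two candidate portfolios:

import numpy as np

# Hypothetical example: three assets with made-up expected returns and covariances
R = np.array([0.02, 0.05, 0.08])            # expected return of each asset
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.20, 0.05],
                  [0.01, 0.05, 0.40]])      # covariance matrix of the returns
lam = 0.5                                   # risk-appetite factor

def objective(w: np.ndarray) -> float:
    # 1/2 * w^T Sigma w  -  lambda * R^T w
    return 0.5 * w @ Sigma @ w - lam * R @ w

# The portfolio with the lower objective value is preferred under this risk appetite
print(objective(np.array([0.0, 0.0, 1.0])))  # all-in on the riskiest asset
print(objective(np.array([0.4, 0.4, 0.2])))  # a more diversified mix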
While this model makes some simplistic assumptions, such as relying on only two parameters – the expected return and the risk variance – to predict future asset behavior, it is still useful in financial planning. The model can be extended to better capture an asset’s financial behavior by imposing constraints on the portfolio to fit regulatory criteria, minimize exposure to specific sectors, and more. For example, we later demonstrate how to constrain the total number of shares purchased relative to a defined budget (μ).
From a computational point of view, the portfolio optimization problem is NP-hard when minimizing over binary or integer domains with additional constraints. In simpler terms, the runtime of the best known exact methods grows exponentially as we increase the number of assets (the size of ω). Therefore, finding ways to improve the runtime and the quality of the results is important when the number of assets is large.
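For a rough sense of scale, the discrete search space alone grows as (number of allowed share counts)^(number of assets). A quick back-of-the-envelope check, using the asset range we adopt later in this post (0 to 7 shares per asset):

# Size of the discrete search space: (specific_budget + 1) ** num_assets
print((7 + 1) ** 6)      # 262144 candidate portfolios for 6 assets
print((7 + 1) ** 20)     # ~1.15e18 for 20 assets -- far beyond exhaustive search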
Quantum computation for optimization problems
Quantum computers exploit properties of quantum mechanics which are not accessible to classical computers. Quantum computers use qubits (quantum bits) in superposition, allowing them to process a vast number of possibilities simultaneously. Quantum computers can create non-local correlation (entanglement), where the state of one qubit is directly tied to the state of another, regardless of their distance. Interference is another key phenomenon in quantum mechanics that enables quantum computers to manipulate the amplitudes of quantum states, changing the probabilities of measuring different outcomes.
These capabilities give rise to quantum algorithms which solve certain problems much faster than known classical algorithms. A notable example is Shor’s algorithm, which solves the integer factorization problem in polynomial time on a quantum computer, where the fastest known classical algorithm takes exponential time.
Developing new algorithms that use quantum computers to solve hard problems of relevance to industry, like portfolio optimization, is an active topic of research. Among these algorithms are quantum linear equation solvers like Harrow-Hassidim-Lloyd (HHL), which can also be used for portfolio optimization, and variational quantum algorithms like QAOA.
QAOA is a hybrid algorithm, which means it combines both classical and quantum computation. The algorithm searches through all potential solutions using a mixer layer (which shuffles between alternative solutions) and a cost layer (which favors good ones). The circuit parameters are fine-tuned classically in an iterative manner, converging toward their optimal values with each run of QAOA.
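Schematically, the hybrid loop looks something like the sketch below. This is a conceptual illustration only, not the Classiq implementation used later in this post; the quantum evaluation is replaced by a toy stand-in so the sketch runs without a quantum backend.

import numpy as np
from scipy.optimize import minimize

def estimate_cost(params: np.ndarray) -> float:
    # Stand-in for the quantum step: in a real QAOA run, these angles
    # parameterize the mixer and cost layers, the circuit is sampled,
    # and the average objective value of the measured bitstrings is returned.
    # A toy quadratic keeps this sketch runnable without quantum hardware.
    return float(np.sum((params - 0.3) ** 2))

num_layers = 3                                                 # "p", the number of QAOA layers
initial_params = np.random.uniform(0, np.pi, 2 * num_layers)   # one mixer + one cost angle per layer

# Classical outer loop: a gradient-free optimizer proposes new angles,
# the (here simulated) quantum evaluation scores them, and the loop repeats.
result = minimize(estimate_cost, initial_params, method="COBYLA")
print(result.x)  # the converged circuit parameters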
A bit of quantum theory: QAOA
In this section we dive deeper into the aspects of QAOA relevant to our exploration of the portfolio optimization problem. Readers interested in learning more about QAOA can start with this paper on the topic from Cornell University.
QAOA is a very flexible algorithm that can solve optimization problems with many constraints. There are two main options for incorporating constraints into the algorithm. The first is by modifying the mixer layer to only include valid solutions, thus limiting the optimization search space. The second option is to modify the cost layer (indicated in Figure 1 by HC) by adding a penalty factor to the function we are trying to optimize so that invalid solutions are penalized and are less likely to be picked.
We investigate the second option because it produces considerably shallower circuits with fewer gates than the first, making it more relevant to near-term noisy quantum computers. As a result, we want to know how to implement a penalty factor that will increase the likelihood of selecting an optimal portfolio of assets.
Here we explore two kinds of constraints: equality and inequality. An equality constraint could be the requirement to spend the full budget, while an inequality constraint represents the need to spend some of the budget, but not necessarily all of it. For the equality constraint, we introduce a defined budget (μ) for the purchase of assets for our portfolio (ω), and require the entire budget to be used: 𝛴ᵢωᵢ = μ. Mathematically, to satisfy this, we add a quadratic penalty term to our portfolio optimization problem and attach a penalty factor coefficient (𝜎) as follows:

½ ωᵀ𝛴ω − 𝝀 Rᵀω + 𝜎 (𝛴ᵢωᵢ − μ)²
For the inequality constraint, instead of requiring a certain amount to be spent, we set an upper limit on the amount that can be spent on assets: 𝛴ᵢωᵢ ≤ μ. To satisfy this, we need to use slack variables, denoted by s. Mathematically, the objective function looks as follows:

½ ωᵀ𝛴ω − 𝝀 Rᵀω + 𝜎 (𝛴ᵢωᵢ + s − μ)²,  with s ≥ 0
The slack variables s eliminate the penalty term for solutions where our inequality 𝛴ᵢωᵢ ≤ μ holds, since they can take on the value of the unused budget, μ − 𝛴ᵢωᵢ, driving the penalty to zero.
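To illustrate how the two penalty terms behave, here is a rough sketch with a hypothetical allocation and a made-up penalty coefficient, assuming the quadratic penalty form written above:

import numpy as np

mu = 12         # the defined budget
sigma = 2.0     # penalty factor coefficient (the value we sweep over later)

def equality_penalty(w: np.ndarray) -> float:
    # Nonzero whenever the budget is not spent exactly
    return sigma * (np.sum(w) - mu) ** 2

def inequality_penalty(w: np.ndarray, s: int) -> float:
    # With a non-negative slack s, this can be driven to zero
    # whenever sum(w) <= mu by choosing s = mu - sum(w)
    return sigma * (np.sum(w) + s - mu) ** 2

w = np.array([3, 2, 1, 0, 2, 1])                    # hypothetical allocation, sum = 9 < 12
print(equality_penalty(w))                          # positive: the full budget was not used
print(inequality_penalty(w, s=int(mu - w.sum())))   # zero: the slack absorbs the unused budget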
Adding slack variables increases the number of qubits required for the calculation, and the number of operational steps, otherwise known as circuit depth, for the calculation. However, this method still gives shallower circuits than the alternative method for adding constraints that produces only valid solutions.
Building shallow circuits is very important as quantum computers are currently subject to errors that accumulate with the depth of the quantum circuit.
Modeling portfolio optimization with Classiq on Amazon Braket
The Classiq Software Development Kit (SDK) allows users to model quantum algorithms without having to navigate the low-level details like qubit assignment and quantum logic operations, significantly simplifying the process of designing and applying quantum algorithms to solve complex problems.
Collecting historical stock data
We started by collecting historical stock price data for several assets, including Apple, Walmart, Tesla, GE, Amazon, and Deutsche Bank, using the Yahoo Finance API (yfinance). Using this data, we calculated the expected returns and the covariance matrix, setting the stage for optimized portfolio construction. Note that we calculate logarithmic returns, a common approach to measuring a rate of change:
from typing import List

import numpy as np
import pandas as pd
import yfinance as yf

def download_data(
    stocks: List[str], start_date: str, end_date: str, period_step: int
) -> pd.DataFrame:
    # Download closing prices for each ticker and keep every period_step-th sample
    stock_data = {}
    for stock in stocks:
        ticker = yf.Ticker(stock)
        stock_data[stock] = ticker.history(start=start_date,
                                           end=end_date)["Close"][::period_step]
    return pd.DataFrame(stock_data)

def calculate_return(data: pd.DataFrame) -> pd.DataFrame:
    # Log return between consecutive samples; drop the first (undefined) row
    log_return = np.log(data / data.shift(1))
    return log_return[1:]

stocks = ["AAPL", "WMT", "TSLA", "GE", "AMZN", "DB"]
dataset = download_data(stocks=stocks,
                        start_date="2010-07-01",
                        end_date="2017-07-01",
                        period_step=5)  # weekly changes
log_period_returns = calculate_return(dataset)

returns = log_period_returns.mean().to_numpy()
covariances = log_period_returns.cov().to_numpy()
Modeling QAOA with Classiq
Classiq’s version of QAOA is based on the S. Hadfield et al. paper on the Quantum Alternating Operator Ansatz. This method offers an easy way of writing the optimization problem, configuring the circuit layout and mixer type, selecting the classical optimizer, and executing on any simulator or QPU.
We begin modeling the QAOA problem by first defining the optimization problem using the Python package Pyomo:
import numpy as np
import pyomo.core as pyo

def portfolio_optimization(
    covariances: np.ndarray,
    returns: np.ndarray,
    budget: int,
    specific_budget: int,
    λ: float,
    equality: bool,
) -> pyo.ConcreteModel:
    model = pyo.ConcreteModel()
    num_assets = len(returns)

    # define ω, a vector of size num_assets, over the integer domain
    model.w = pyo.Var(range(num_assets),
                      domain=pyo.NonNegativeIntegers,
                      bounds=(0, specific_budget))
    w_array = list(model.w.values())

    # setting the constraint
    if equality:
        model.budget_rule = pyo.Constraint(expr=(sum(w_array) == budget))
    else:
        model.budget_rule = pyo.Constraint(expr=(sum(w_array) <= budget))

    # setting the expected profit and risk
    model.profit = returns @ w_array
    model.risk = w_array @ covariances @ w_array

    # setting the minimization objective
    model.cost = pyo.Objective(expr=model.risk - λ * model.profit,
                               sense=pyo.minimize)
    return model
Notice that we defined the problem over the non-negative integer domain and added a budget constraint using pyo.Constraint, which is automatically mapped into the objective function as a penalty term, as described in the previous section.
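As a quick sanity check (a hypothetical snippet rather than part of the original workflow), we can build the model with the values used later in this post and print its structure with Pyomo’s built-in pprint:

# Hypothetical check: budget of 12 shares, up to 7 shares per asset,
# risk-appetite factor of 50, equality constraint (the values used later in this post)
model_check = portfolio_optimization(covariances, returns,
                                     budget=12, specific_budget=7,
                                     λ=50, equality=True)
model_check.pprint()  # prints the variables, the budget constraint, and the objective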
Next, we configured the number of layers of the QAOA circuit (p) to 5, and set a parameter for the value of the penalty, which we modify between the different execution runs:
from classiq.applications.combinatorial_optimization import (
    OptimizerConfig,
    QAOAConfig,
)
from classiq import (
    construct_combinatorial_optimization_model,
    set_execution_preferences, show, synthesize, execute,
)
from classiq.execution import (
    ExecutionPreferences,
    AwsBackendPreferences,
)

penalty_value = 2.0  # placeholder: the penalty factor, set differently for each execution run
qaoa_preferences = QAOAConfig(num_layers=5,
                              penalty_energy=penalty_value)
For the classical optimization part of the QAOA algorithm (OptimizerConfig), we chose the gradient-free COBYLA optimizer, setting the maximum number of classical iterations and the alpha parameter for running CVaR-QAOA, an improved variant of the QAOA algorithm:
optimizer_preferences = OptimizerConfig(
    max_iteration=130, alpha_cvar=1.0, name="COBYLA"
)
Synthesized circuit and execution
It’s important to note that the size of the synthesized circuits is directly related to the value assigned to the budget limit, the number of assets analyzed, and the range of asset units that can be included in the optimized portfolio. For instance, assigning a budget of 10,000 shares and analyzing the return and risk of the stocks of 100 companies, allowing for an investment of [0..1000] shares in each, would require far more qubits and quantum logic gates.
To keep the resulting circuits to a size that can be easily simulated using today’s quantum simulators, we set the budget to 12 shares and decided to focus on 6 different stocks, allowing up to 7 shares for each, defined by the integer ωᵢ ∈ [0..7] so that each asset takes 3 qubits (2³ = 8). Finally, we set the risk-appetite factor (𝝀) to 50, which represents a high risk appetite. Note that we do all calculations as if shares of different companies have the same value; adjustments for different share values are straightforward and can be made in a separate pre-processing step.
With this setup, the quantum circuit generated for the equality constraint consisted of 18 qubits and a depth of 554, while the inequality constraint circuit consisted of 22 qubits and a depth of 656 due to the additional slack variables.
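These counts are consistent with a simple back-of-the-envelope calculation: each asset needs ⌈log₂(specific_budget + 1)⌉ qubits for its integer weight, and (as an assumption about the encoding, inferred from the reported numbers) the slack variable for the inequality constraint is binary-encoded over ⌈log₂(budget + 1)⌉ additional qubits:

import math

num_assets = 6
specific_budget = 7    # each asset holds 0..7 shares
budget = 12            # total budget, used to size the slack variable

qubits_per_asset = math.ceil(math.log2(specific_budget + 1))  # 3
asset_qubits = num_assets * qubits_per_asset                  # 18 (equality case)

# Assumption: the slack s in [0, budget] is binary-encoded
slack_qubits = math.ceil(math.log2(budget + 1))               # 4
print(asset_qubits, asset_qubits + slack_qubits)              # 18 22 (inequality case)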
Today, you can run these circuits on a variety of managed quantum simulators and quantum processing units on Amazon Braket, like the 79-qubit Aspen-M device from Rigetti, and the 25-qubit Aria QPU from IonQ. To learn more about the available devices on Amazon Braket, refer to the documentation.
Here we run this circuit on the on-demand state-vector simulator, SV1, which is capable of simulating up to 34 qubits without noise. Braket’s user experience is designed to enable customers to switch between simulators and real quantum hardware with just a single line of code. In Classiq, this means that to change from a simulator to running on real hardware, we only need to change one parameter (backend_name).

We determine the number of times we measure the quantum circuit for each classical iteration (num_shots) according to the statistical variance between different shots.
aws_preferences = AwsBackendPreferences(backend_name="SV1",
                                        aws_role_arn="OUR_AWS_ROLE",
                                        s3_bucket_name="OUR_BUCKET_NAME",
                                        s3_bucket_key="DEFINE_FOLDER",
                                        )
backend_preferences = ExecutionPreferences(
    backend_preferences=aws_preferences, num_shots=3000
)
Finally, we create the quantum program and execute it:
# budget of 12 shares, up to 7 shares per asset, risk-appetite factor of 50;
# set equality=False to run the inequality-constraint variant
portfolio_model = portfolio_optimization(covariances, returns,
                                         budget=12, specific_budget=7,
                                         λ=50, equality=True)

# create combinatorial quantum model
qmodel = construct_combinatorial_optimization_model(
    pyo_model=portfolio_model,
    qaoa_config=qaoa_preferences,
    optimizer_config=optimizer_preferences,
)
qmodel = set_execution_preferences(qmodel, backend_preferences)
# create quantum program using the synthesis engine
qprog = synthesize(qmodel)
# visualize the quantum circuit
show(qprog)
# execute the circuit
res = execute(qprog)
To see the best 10 results:
import pandas as pd
optimization_result = pd.DataFrame.from_records(res[0].value)
optimization_result.sort_values(by="cost", ascending=True).head(10)
Analyzing the results
We analyzed how small changes to the penalty factor affected the probability of finding valid solutions, executing the QAOA circuits for both the inequality and equality constraints.
Sampling a valid solution
Our first focus was sampling a valid solution, one that satisfies the inequality or equality constraint as explained earlier. In the graphs that follow, we see the probability of obtaining such a solution.
Without any penalty, the graphs show zero probability for both constraints. However, as the penalty is increased, we notice a variation in behavior between the types of constraints. For the inequality constraint, the probability consistently grows with the penalty factor. In contrast, for the equality constraint, there’s a discernible peak or optimal penalty value. Beyond this peak, as we increase the penalty, the likelihood of getting a valid solution diminishes.
In Figure 3, we show that the probability for the inequality constraint is higher because the number of valid solutions (i.e., those which use part of the total budget rather than all of it) is greater.
Sampling the best 1% of solutions
Next, we examined the probability of sampling a solution in the top 1% of all valid solutions. First, we assessed all the sampled valid solutions from all our runs based on their objective function, identifying the best 1%. Then we re-analyzed our execution results, looking for penalty factor values that would yield the highest probability of landing on a solution in the previously identified top-1% set.
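A sketch of this post-processing with pandas, under the assumption that the sampled solutions of each run have been collected into a DataFrame with cost, probability, and a boolean valid column (an illustrative schema, not the literal fields of the Classiq result object):

import pandas as pd

def best_1pct_threshold(all_valid_costs: pd.Series) -> float:
    # Pool the objective values of all valid solutions across all runs
    # and take the 1st percentile as the cut-off for the "top 1%" set
    return float(all_valid_costs.quantile(0.01))

def top1_probability(run_df: pd.DataFrame, cost_threshold: float) -> float:
    # Probability mass a single run assigns to valid solutions whose
    # objective value is at or below the top-1% threshold
    top = run_df[run_df["valid"] & (run_df["cost"] <= cost_threshold)]
    return float(top["probability"].sum())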
When examining the graph (Figure 4) for this sampling, a similar pattern emerges for both constraints. A minimal penalty factor yields the best results. However, as we ramp up the penalty, the chance of obtaining these top solutions decreases.
We discovered that the quality of the results is sensitive to the magnitude of the penalty factor, which has a great impact on the quality of the algorithm. For both types of constraints, the probability to find the best 1% of valid solutions peaks when the penalty is relatively small, while increasing the penalty beyond a certain value reduces the probability of getting a good solution.
We want to highlight that fine-tuning the penalty factor is just one facet of the broader challenge. Near-term quantum algorithms are usually heuristic in nature. This means they demand considerable experimentation, involving trial and error, to optimize various parameters and settings. This iterative process is crucial to harnessing the full potential of quantum computing in NISQ devices.
Classiq, in conjunction with Amazon Braket, is designed to make this level of in-depth experimentation and research easier. One feature designed specifically for building, testing, and running variational algorithms like QAOA is Amazon Braket Hybrid Jobs. It gives priority access to quantum hardware, minimizing wait times in queues, and provides the convenience of submitting code in a fire-and-forget manner, so we can focus on algorithm development and let Amazon Braket handle the underlying infrastructure.
Conclusion
At Citi Innovation Labs, we see the potential of quantum computing in benefitting finance and are engaging with leaders in the industry like Classiq to understand how quantum computing can impact business problems such as portfolio optimization. We demonstrated how Classiq running on Amazon Braket could be used to study this complex problem, taking advantage of Classiq’s ease-of-use, combined with the on-demand, pay-as-you-go access to quantum hardware provided by Braket.
If you’re interested in quantum portfolio optimization or running variational hybrid algorithms, you can kickstart your journey today using Classiq. To learn more about how Classiq runs on Amazon Braket, check out this blog post. To get started using Amazon Braket Hybrid Jobs to run variational algorithms like QAOA on a variety of managed simulators and QPUs, check out our Hybrid Jobs page.
The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this blog.