Quantum Finance: Credit Risk Analysis with QAE

Alice Liu
12 min read · Mar 25, 2020


Applying the QAE algorithm to evaluate credit risk and comparing the efficiency with classical Monte Carlo simulations

Today’s Headlines:

The Dow Jones drops over 777 points in a single September day. Lehman Brothers declares bankruptcy.

Uh oh. Not long after, the stock market crashed, falling to its lowest point in years. In a matter of months, you lose your job, your house, all your savings. Everything you worked so hard for: gone. You are stripped of everything you had and owned.

This was unfortunately the exact scenario for over 2.6 million Americans in the 2008 financial meltdown. A sick child loses health insurance after a parent becomes unemployed, a recent college graduate scans the few options in an empty job market, a homeowner prays for a government stimulus check to stave off homelessness, a former business owner flips through a Bible for words of hope. Young or old, big or small, recessions affect everyone.

But what caused this to happen? The short answer: risk. More specifically, failing to properly estimate and deal with it. As seen in 2008, the stock market crash came from underestimating the risk of bonds backed by subprime mortgages. The consequences of not dealing with risk are actually seen every day, with unplanned risks bringing businesses and individual careers to an end.

Financial Risk

Every business venture and company comes with both uncertainty and risk. Uncertainty is universal, and there isn't really much you can do to prevent it. Risk, on the other hand, comes from choices that have different outcomes, both good and bad, for each individual. Risk is specific to you, while uncertainty applies to everyone.

For example, if you are with a friend on a cloudy day, the chance of rain is uncertain for both of you and independent of anything either of you does. If you decide to go outside for the day without bringing an umbrella while your friend stays indoors, only you bear the risk of getting wet, i.e. an undesirable outcome.

Example of uncertainty vs. risk: On a cloudy day, Danny the dog risks getting wet when he goes outside to play as opposed to his friend Daniella, who stays inside. Rain is uncertain to both of them, but Daniella avoids the risk by staying in her doghouse.

The same applies to the market when investing. Person A and Person B both hold a stock, and the chance of that stock declining in price the next day is uncertain to both of them. But if Person A keeps holding the stock while Person B sells it immediately, only Person A bears the risk of losing money.

Credit Risk Analysis

Lending follows the same logic. When you lend a client a certain amount of money in the form of credit or a mortgage, there is a chance that the borrower does not pay the loan back and defaults. Known as credit risk, this is extremely harmful for the lender, interrupting cash flows, increasing the cost of collection, and sometimes resulting in a huge loss.

So how do we deal with this to ensure that we aren't "handing out money for free" and losing profits? The answer is credit risk analysis, which assesses both the probability that the borrower fails to repay and the loss incurred when the loan obligations are not met.

There are two main components when evaluating credit risk:

  1. Amount of the loss
  2. Probability of the default happening

Multiplying the two components together gives the expected loss.
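As a quick, made-up example (the numbers here are purely illustrative):

```python
# Hypothetical single-loan example; the numbers are illustrative, not from the article.
loss_if_default = 100_000       # dollars lost if the borrower defaults
probability_of_default = 0.02   # 2% chance the borrower defaults

expected_loss = loss_if_default * probability_of_default
print(expected_loss)            # 2000.0 -> an expected loss of $2,000
```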

At first glance, the amount of the loss is pretty easy to calculate, and the probability side seems simple too: if the borrower's probability of default is low, give the loan; if it's high, don't.

Simple right? No, not really. For one, there are many factors and parameters to consider when assessing a client’s profile, making it almost impossible to pinpoint exactly how likely that client is to default on their loan. It’s more about estimating the probability.

To get around this probability issue, financial practitioners usually charge a fee when granting a loan: the higher the risk, the higher the interest rate, in an attempt to offset the expected loss.

However, this sometimes causes the probability of the client defaulting to increase, because of the larger amount of money they now owe due to interest. If enough clients default, it may cause a credit crunch, starting a recession.

Since this didn’t exactly work out, some practitioners resorted to credit risk modeling, using quantitative credit models to actually estimate the probability of default.

Different models have been created and used, including:

  • Credit Scoring Models — measures from fundamentals in financial reports
  • Structural Models — based on share price and volatility
  • Reduced Form Models — uses a general economic environment viewpoint
  • Credit Migration Models — actuarial techniques on how credit rating changes over time
  • Credit Portfolio Models — looks at the credit risk holistically and has the benefit of diversification

There are some limitations and problems with these models, though, which distort the estimated probability.

They aren't especially effective due to:

  1. Lacking data — there either isn't enough data to cover a specific time period, or the relevant information isn't present
  2. Skewed distributions are not considered — the mathematics and parameters used to calculate the probability assume a normal distribution, which may not always be the case (see the sketch after the figure below)
  3. Correlations are not acknowledged — there may be a domino effect where one default leads to more defaults, especially during an economic crisis

Examples of different distributions: one problem is that most credit models assume the data falls within a normal distribution (2), while in reality, the data falls under a skewed distribution (1, 3)
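To see why the second point matters, here is a small illustrative sketch (my own example, not from the article): it compares the tail probability of a right-skewed lognormal loss distribution with what a normal distribution having the same mean and standard deviation would predict.

```python
import numpy as np
from scipy.stats import lognorm, norm

# A right-skewed "true" loss distribution (lognormal), parameters chosen arbitrarily.
true_losses = lognorm(s=1.0, scale=1.0)
mean, std = true_losses.mean(), true_losses.std()

# A normal model fitted only to the mean and standard deviation.
normal_model = norm(loc=mean, scale=std)

threshold = mean + 3 * std  # a "large loss" threshold
print("P(loss > threshold), skewed reality:", true_losses.sf(threshold))
print("P(loss > threshold), normal model:  ", normal_model.sf(threshold))
# The skewed distribution puts noticeably more probability in the extreme-loss tail,
# so a normal-only model understates the risk of rare, very large losses.
```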

So we've looked at the ineffectiveness of interest charges and of classical credit risk modeling. What now? The answer is to build models that handle a greater variety of data with greater accuracy by simulating many random scenarios, better known as the Monte Carlo method.

Monte Carlo Simulations

Monte Carlo simulations, unlike the other methods described, output a range of possible outcomes, assessing the impact of risk and the probability of each outcome for any choice of action.

This allows for better decision-making by displaying the extreme possibilities (including the outcomes of the most conservative and the riskiest decisions) together with everything in between, all under uncertainty.

A Monte Carlo simulation performs the actual risk analysis by substituting a probability distribution, i.e. a range of values, for each uncertain input in order to build a model of possible results.

It then repeatedly calculates the results, using a different set of random values from the probability function with each round, typically involving thousands of recalculations before it produces the final distributions of possible outcome values.

Different examples of probability distributions the simulation considers, common ones including lognormal, uniform and triangular
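As a minimal illustration of this repeated-sampling idea, here is a toy sketch (my own example with made-up numbers, not the article's model): the expected loss of two independent loans is estimated by drawing many random scenarios and averaging.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Toy portfolio: two loans with assumed default probabilities and losses given default.
default_probs = np.array([0.15, 0.25])
losses_given_default = np.array([1.0, 2.0])

n_scenarios = 100_000
# Each scenario randomly decides, per loan, whether it defaults.
defaults = rng.random((n_scenarios, 2)) < default_probs
scenario_losses = defaults.astype(float) @ losses_given_default

print("Estimated expected loss:", scenario_losses.mean())
# True value is 0.15*1 + 0.25*2 = 0.65; the estimate converges like 1/sqrt(n_scenarios).
```

That 1/sqrt(n) convergence is exactly the cost that the QAE algorithm discussed later improves on.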

This makes the process much more efficient, as:

  • All combinations of input parameters are tested, and the values for each parameter cover its full range, for a more thorough analysis
  • It can run tens of thousands of different scenarios, automatically collecting and summarizing the results with all the details

When the processing is finished, the results include:

  • Probabilistic Results — the likelihood of each outcome
  • Sensitivity Analysis — which inputs had the greatest effect on the results
  • Scenario Analysis — the combinations of input values that lead to certain outcomes
  • Correlations — models of the interdependent relationships between input variables

This all sounds great, so why aren't Monte Carlo simulations widely used today? The reason is time and computational power. Even with just a few input parameters, the values of each parameter can cover a wide range, which in turn creates a huge number of different scenarios that are difficult to process on today's computers.

For example, stock market simulations that use these methods routinely take a full day to run. With just 10 different inputs and 10 possible values each, there would be 10¹⁰ (10 billion) different scenarios.

How are we supposed to model billions of different scenarios? That would probably take months, assuming that your computer doesn’t break down in the process.

What if we could speed up this process without worrying about the amount of computational power? This is where quantum computing, and its application to financial modeling, comes in.

Quantum Finance

In quantum computing, the basic unit is the qubit, which encodes the classical bits 0 and 1 as the quantum states |0⟩ and |1⟩. A qubit can also be in a superposition of |0⟩ and |1⟩, occupying both basis states at once, and a register of qubits can likewise represent all of the system's states simultaneously.
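Here is a minimal Qiskit sketch of that idea (assuming a recent Qiskit installation with the Aer simulator; the article's own code used an earlier Qiskit version): a Hadamard gate puts a qubit into an equal superposition of |0⟩ and |1⟩, so repeated measurements return 0 about half the time and 1 the other half.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(1, 1)
qc.h(0)            # Hadamard: |0> -> (|0> + |1>) / sqrt(2)
qc.measure(0, 0)

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)      # roughly {'0': ~500, '1': ~500}
```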

This property allows quantum computers to perform computations over massive numbers of states in parallel. With this, they have the potential to greatly reduce the time and cost of the long and complex calculations used in financial applications, including analyzing risk and calculating credit scores.

Within finance, there are several problems that are computationally hard, including selecting the assets of an optimal portfolio and estimating risk and return. Quantum methods, including optimization models, quantum machine learning, and Monte Carlo with QAE, are currently being developed to speed up these tasks efficiently and accurately.

Quantum Amplitude Estimation (QAE)

The Quantum Amplitude Estimation (QAE) algorithm samples probability distributions quadratically faster than classical methods, efficiently estimating the expectation values that Monte Carlo simulations compute.

The QAE uses a series of Quantum Amplitude Amplification (QAA) operations together with the Quantum Fourier Transform (QFT) from Shor's algorithm in order to measure the amplitude of a given state.

This means that a Monte Carlo style risk calculation run with the QAE algorithm needs quadratically fewer samples to reach the same accuracy as its classical counterpart.
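To make this concrete, here is a minimal sketch of amplitude estimation on a single-qubit "coin flip" (it uses the current qiskit_algorithms interface rather than the older Qiskit Aqua one the original project used, so treat the exact imports and class names as assumptions about your installed version): the probability p of measuring |1⟩ is encoded as an amplitude and estimated without plain repeated sampling.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.primitives import Sampler
from qiskit_algorithms import EstimationProblem, IterativeAmplitudeEstimation

p = 0.3  # the probability we want to estimate

# State preparation A: rotate the qubit so that P(measure 1) = p.
a = QuantumCircuit(1)
a.ry(2 * np.arcsin(np.sqrt(p)), 0)

problem = EstimationProblem(state_preparation=a, objective_qubits=[0])
iae = IterativeAmplitudeEstimation(epsilon_target=0.01, alpha=0.05, sampler=Sampler())
print(iae.estimate(problem).estimation)  # close to 0.3
```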

Within risk analysis, Value at Risk (VaR) and Conditional Value at Risk (CVaR) are used to quantify the risk mathematically. VaR measures the portfolio loss that will not be exceeded at a given confidence level, while CVaR is the expected loss once the VaR breakpoint is reached.

Both measures are classically estimated with Monte Carlo sampling of the relevant probability distribution, and both can be determined to the same accuracy with a quadratic speedup (compared to classical methods) when applying the QAE algorithm.
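For reference, this is how the two measures would be read off a set of classical Monte Carlo loss samples (a small sketch of the definitions, in my own notation):

```python
import numpy as np

def var_cvar(losses, alpha=0.05):
    """VaR and CVaR at confidence level 1 - alpha, from an array of sampled losses."""
    var = np.quantile(losses, 1 - alpha)   # loss level not exceeded with probability 1 - alpha
    cvar = losses[losses >= var].mean()    # expected loss, given the VaR level is reached
    return var, cvar
```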

Calculating risk measures in a two-asset portfolio with QAE (project)

For my project, I computed the risk measures of a simple two-asset portfolio by conducting simulations on IBM’s Qiskit. I applied the QAE algorithm over the Monte Carlo method to estimate the expected loss and the cumulative distribution function (CDF) to efficiently measure value at risk and conditional value at risk.

  1. The first step is importing all the necessary packages and modules.
  2. Set the parameters — When we analyze the credit risk of a portfolio of K assets, the default probability of each asset k follows a Gaussian Conditional Independence model, shown below:
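Written out (this follows the standard formulation used in Qiskit's credit risk example; the sign in front of z depends on how Z is oriented), the model is:

p_k(z) = F( (F⁻¹(p_k⁰) + √ρ_k · z) / √(1 − ρ_k) )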

This looks extremely complicated so let’s break it down.

Here, z is a value sampled from Z, a latent random variable that follows a standard normal distribution. The other variables are defined as:
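  • p_k⁰ — the base default probability of asset k
  • ρ_k — the sensitivity of asset k's default probability to the value z of Z
  • F — the cumulative distribution function of the standard normal distribution (and F⁻¹ its inverse)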

Based on this model, we define the parameters. These include the number of qubits used to represent Z, the truncation value for Z (z_max), the base default probabilities for every asset, the sensitivities of the default probabilities, the loss given default for each asset k, and the confidence level for VaR and CVaR.
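A sketch of what that parameter block can look like in Python (the numerical values below mirror Qiskit's credit risk tutorial and are my assumption of what the project used):

```python
# Model and problem parameters (values taken from Qiskit's credit risk tutorial; assumed here).
n_z = 2                   # number of qubits used to represent Z
z_max = 2                 # truncation value for Z, i.e. Z is modeled on [-z_max, +z_max]
p_zeros = [0.15, 0.25]    # base default probabilities p_k^0 for the two assets
rhos = [0.1, 0.05]        # sensitivities of the default probabilities to Z
lgd = [1, 2]              # loss given default for each asset k
alpha = 0.05              # VaR / CVaR confidence level is 1 - alpha = 95%
```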

3. The next step is to construct a circuit that loads the uncertainty model (a sketch follows the list below), by:

  • constructing the circuit factory
  • defining the number of qubits to represent the model
  • initializing the quantum register and circuit
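A rough sketch of this step (the class below comes from recent qiskit_finance releases; the original project used the older Qiskit Aqua circuit-factory interface, so treat the exact import and signature as assumptions):

```python
from qiskit import QuantumCircuit
from qiskit_finance.circuit.library import GaussianConditionalIndependenceModel

# Circuit that loads Z on n_z qubits and, conditioned on it, one "default" qubit per asset.
uncertainty_model = GaussianConditionalIndependenceModel(n_z, z_max, p_zeros, rhos)

# Total width: n_z qubits for Z plus one qubit per asset.
num_qubits = uncertainty_model.num_qubits
circuit = QuantumCircuit(num_qubits)
circuit.compose(uncertainty_model, inplace=True)
```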

With this, we are able to compute the actual (exact) reference values (a classical sketch of this computation follows the list) for:

  • The expected loss
  • The PDF (Probability Density Function) and CDF (Cumulative Distribution Function), which together give a complete description of the probability distribution of the loss as a random variable
  • Value at Risk (VaR) and Conditional Value at Risk (CVaR) for the loss
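For reference, the exact values can be obtained classically by enumerating every combination of the (discretized) latent variable Z and the assets' default states. The sketch below does exactly that, reusing the parameter block from earlier; it is the brute-force benchmark the quantum estimates are compared against, not the quantum circuit itself.

```python
from itertools import product

import numpy as np
from scipy.stats import norm

# Uses n_z, z_max, p_zeros, rhos, lgd, alpha from the parameter block above.
# Discretize Z on 2**n_z grid points in [-z_max, z_max], weighted by the normal density
# (roughly how the quantum uncertainty model truncates and discretizes Z).
z_values = np.linspace(-z_max, z_max, 2**n_z)
z_probs = norm.pdf(z_values)
z_probs /= z_probs.sum()

losses, probs = [], []
for z, pz in zip(z_values, z_probs):
    # Conditional default probability of each asset given Z = z (Gaussian Conditional Independence).
    p_def = norm.cdf((norm.ppf(p_zeros) + np.sqrt(rhos) * z) / np.sqrt(1 - np.array(rhos)))
    for defaults in product([0, 1], repeat=len(p_zeros)):
        d = np.array(defaults)
        probs.append(pz * np.prod(np.where(d == 1, p_def, 1 - p_def)))
        losses.append(float(d @ np.array(lgd)))

losses, probs = np.array(losses), np.array(probs)
expected_loss = losses @ probs                           # exact expected loss
order = np.argsort(losses)
cdf = np.cumsum(probs[order])                            # exact CDF of the total loss
var = losses[order][np.searchsorted(cdf, 1 - alpha)]     # smallest loss with CDF >= 1 - alpha
tail = losses[order] >= var
cvar = (losses[order][tail] @ probs[order][tail]) / probs[order][tail].sum()
print(expected_loss, var, cvar)
```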

The values are then output:

The loss distribution is generated, along with the expected loss, value at risk, and conditional value at risk. The expected loss is 0.64, while the VaR is 2 and the CVaR is 3.

Plotted results for the loss PDF, expected loss, VaR, and CVaR

The results for Z and its probabilities for each value as well as the individual default probabilities are also shown.

Plotted results for Z
Plotted results for the individual default probabilities

4. We then estimate the expected loss and run the amplitude estimation.

This is done by determining the number of qubits to represent the total loss, creating a circuit factory, defining our linear objective function and our overall multivariate problem.
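Because the full credit-risk circuit involves quite a bit of register bookkeeping, here is a deliberately simplified, self-contained stand-in (my own toy example: a uniform 2-qubit "loss" with expectation 1.5, estimated through a linear objective; the qiskit_algorithms class and argument names are assumptions about your installed version):

```python
from qiskit import QuantumCircuit
from qiskit.circuit.library import LinearAmplitudeFunction
from qiskit.primitives import Sampler
from qiskit_algorithms import EstimationProblem, IterativeAmplitudeEstimation

# Toy model: a 2-qubit register holds a "loss" L in {0, 1, 2, 3} with uniform probability.
# The linear objective writes a rescaled version of L onto the amplitude of one extra qubit,
# so that estimating that amplitude (and undoing the rescaling) recovers E[L] = 1.5.
n = 2
objective = LinearAmplitudeFunction(n, slope=1, offset=0, domain=(0, 3), image=(0, 3),
                                    rescaling_factor=0.25)

state_prep = QuantumCircuit(objective.num_qubits)
state_prep.h(range(n))                        # uniform distribution over the loss values
state_prep = state_prep.compose(objective)    # apply the linear objective

problem = EstimationProblem(
    state_preparation=state_prep,
    objective_qubits=[n],                       # the objective qubit sits after the n state qubits
    post_processing=objective.post_processing,  # undoes the rescaling of the objective
)

iae = IterativeAmplitudeEstimation(epsilon_target=0.01, alpha=0.05, sampler=Sampler())
result = iae.estimate(problem)
print(result.estimation_processed)            # should come out close to 1.5
```

The article's circuit follows the same pattern, except that the loss register is produced by summing each asset's loss given default, conditioned on the uncertainty model.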

The quantum circuit representing the objective function is then validated by simulating it and analyzing the probability of the objective qubit being in the state |1⟩, which is the quantity the QAE will eventually approximate.

The exact and estimated values of the expected loss are generated, as well as the probability.

5. The Cumulative Distribution Function (CDF) of the loss is now estimated.

The CDF at a given level is the probability of measuring |1⟩ in the objective qubit, so QAE can be used to estimate it directly. By setting the appropriate input value for the estimation (a loss level of 2 is used in this case), obtaining the operator, getting the number of qubits, and finally constructing the circuit, the exact and estimated values as well as the probability are generated.
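Here is a simplified, self-contained sketch of that comparator idea (again a toy uniform loss, with the same caveats about qiskit_algorithms class names as above): an integer comparator flips an objective qubit whenever the loss register is at or below the chosen level, so the amplitude being estimated is the CDF value itself.

```python
from qiskit import QuantumCircuit
from qiskit.circuit.library import IntegerComparator
from qiskit.primitives import Sampler
from qiskit_algorithms import EstimationProblem, IterativeAmplitudeEstimation

# Toy model: a 2-qubit "loss" register, uniform over {0, 1, 2, 3}.
# The comparator flips its result qubit when L < 3, i.e. L <= 2, so the amplitude
# being estimated is the CDF value P(L <= 2) = 0.75.
n = 2
comparator = IntegerComparator(n, 3, geq=False)

state_prep = QuantumCircuit(comparator.num_qubits)
state_prep.h(range(n))
state_prep = state_prep.compose(comparator)

problem = EstimationProblem(state_preparation=state_prep, objective_qubits=[n])
iae = IterativeAmplitudeEstimation(epsilon_target=0.01, alpha=0.05, sampler=Sampler())
print(iae.estimate(problem).estimation)   # close to 0.75
```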

6. The Value at Risk is then estimated. This is done with a bisection search that uses QAE to evaluate the CDF (a sketch follows the list below). The steps are:

  • Defining the variables, including the amplitude estimation for the CDF and bisection search
  • Checking the values (low or high) and evaluating them
  • Checking if the low value satisfies the condition and if the high value is above the target
  • Performing the bisection search and returning the high value after it completes
  • Running the bisection search to then determine the Value at Risk
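A compact classical sketch of that bisection loop (my own simplification; estimate_cdf stands in for the QAE-based CDF estimation of the previous step and is a hypothetical callable, not a library function):

```python
def value_at_risk(estimate_cdf, loss_levels, alpha=0.05):
    """Smallest loss level from the sorted grid `loss_levels` whose CDF reaches 1 - alpha.

    `estimate_cdf(x)` is assumed to return the (QAE-estimated) value of P(L <= x).
    """
    low, high = 0, len(loss_levels) - 1
    if estimate_cdf(loss_levels[low]) >= 1 - alpha:   # lowest level already meets the target
        return loss_levels[low]
    # Invariant: CDF(low) < 1 - alpha <= CDF(high); it holds initially because
    # the CDF at the maximum loss is 1.
    while high - low > 1:
        mid = (low + high) // 2
        if estimate_cdf(loss_levels[mid]) >= 1 - alpha:
            high = mid
        else:
            low = mid
    return loss_levels[high]
```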

The estimated and exact value at risk as well as the estimated and exact probabilities are generated.

7. Last step (finally!): now it's time to compute the last measure, the Conditional Value at Risk. This is the expected value of the loss, conditional on it being equal to or larger than the value at risk.

The general steps (the arithmetic behind them is sketched after this list) are to:

  1. Define the linear objective
  2. Subtract the VaR
  3. Evaluate the state vector result and normalize
  4. Add the VaR to estimate
  5. Run the QAE to estimate the cVaR
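The arithmetic behind those steps, written out classically (this reuses the exact losses, probs and var values from the earlier classical sketch; the quantum version estimates the same tail expectation with QAE instead of summing it directly):

```python
import numpy as np

# CVaR = VaR + E[max(L - VaR, 0)] / P(L >= VaR):
# the linear objective encodes the part of the loss above the VaR, QAE estimates its
# (rescaled) expectation, the result is normalized by the tail probability, and the
# VaR is added back.
tail_prob = probs[losses >= var].sum()
excess = np.maximum(losses - var, 0.0)
cvar = var + (excess @ probs) / tail_prob
print("CVaR:", cvar)
```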

The estimated and exact values of the CVaR are generated, as well as a graph representing the values.

With QAE, the CDF of the loss was analyzed significantly faster than with classical methods, which either evaluate all possible combinations of defaulting assets or require many samples in a Monte Carlo simulation (both of which take a great deal of time and effort).

Results

With the QAE algorithm implemented, a quadratic speedup (over the classical Monte Carlo simulation) was obtained when estimating the expected loss and the CDF, which were then used to evaluate the value at risk and the conditional value at risk.

For each part, the estimated and exact values were similar (but still varied to a certain degree); for the expected loss, applying QAE gave an estimated value of 0.75 while the exact value was 0.64.

For the value at risk, the estimated probability was 96.2%, slightly higher than the exact probability of 95.9%. The conditional value at risk showed a similar pattern, with an estimated value of 3.86, slightly higher than the exact value of 3.0.

Implications

The era of quantum computing and the development of new quantum algorithms are still relatively young, but the field is expanding faster than ever. In order to achieve true quantum advantage in a real-world scenario, the hardware needs to be significantly improved, both by increasing the number of qubits and by reducing error rates.

It is important to note that quantum computing is still a developing field, and this project was an example of testing the potential of quantum computers with today's resources.

The project results in this article show the potential that quantum optimization and its algorithms already have with today's resources (imagine what this will look like in 20 years at the rate we're developing!).

However, this technology has huge implications for finance, using the power of quantum computation to reduce risk, diversify portfolios, price call options, and compute expected values for fixed-income products. Quantum computers, as new information-processing machines, will shape the future of finance.

By balancing out risks through accurate and reliable analysis, all of us, from the smallest individual to the biggest corporation, can make better investment decisions and further protect the economy from future financial crises.

If you liked this article, add a clap and stay tuned for more articles coming soon! Reach out to me at aliceliu2004@gmail.com or on LinkedIn.
