Press "Enter" to skip to content

I like to MVO it!

[This article was first published on R on OSM, and kindly contributed to R-bloggers.]


In our last post, we ran through a bunch of weighting scenarios using our returns simulation. This produced three million portfolios composed, in part or in whole, of four assets: stocks, bonds, gold, and real estate. These simulations relaxed the allocation constraints to allow us to exclude assets, yielding a wider range of return and risk results while lowering the likelihood of achieving our risk and return targets. We bucketed the portfolios to simplify the analysis of the risk-return trade-off. We then calculated the median return and risk for each bucket and found that some buckets achieved Sharpe ratios close to, or better than, the one implied by our original risk-return constraint. Cutting the data further, we calculated the average weights of the better-Sharpe-ratio portfolios. The result: relatively equal weighting tended to produce a better risk-reward outcome than significant overweighting.

At the end of the post we noted that we could have bypassed much of this data wrangling and simply calculated the optimal portfolio weights for various risk profiles using mean-variance optimization. That is what we plan to do today.

The madness behind all this data wrangling was to identify the best return afforded by a given level of risk. Mean-variance optimization (MVO) solves that problem more elegantly than our “hacky” methods. It uses quadratic programming to minimize the portfolio variance by altering the weights of the various assets in the portfolio, subject to the constraints (in the simplest form) that the portfolio’s return equals the target expected return and that the asset weights sum to one.

More formally it can be expressed as follows:

Minimize: \(\frac{1}{2}w'\Sigma w\)
Subject to: \(r'w = \mu \quad \text{and} \quad e'w = 1\)

Here \(w\) is the vector of asset weights, \(\Sigma\) the covariance matrix of the assets with themselves and every other asset, \(r\) the vector of expected asset returns, \(\mu\) the target expected return of the portfolio, and \(e\) a vector of ones. It is understood that one is employing matrix notation, so \(w'\) is the transpose of \(w\).

If you understand that, it’s probably the roughest rendition of MVO you’ve seen; if you don’t, don’t worry about it. The point is that through some nifty math, you can solve for the precise weights so that every portfolio falling along a line has the lowest volatility for a given level of return, or the highest return for a given level of volatility. This line is called the efficient frontier, since “efficiency” in econospeak means every asset is optimally allocated and “frontier”, well, you get that one, we hope.
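For the curious, the two equality constraints make the program above solvable in closed form via the Lagrangian. Here is a minimal Python sketch with toy numbers (not the post’s data); note that, like the simplest form of MVO, it allows short positions:

```python
import numpy as np

def mvo_weights(mu_assets, cov, target_return):
    """Closed-form minimum-variance weights for a target return.

    Solves: min (1/2) w' Sigma w  s.t.  r'w = mu, e'w = 1
    by stacking the KKT conditions into one linear system. No
    inequality constraints, so short positions are allowed.
    """
    n = len(mu_assets)
    e = np.ones(n)
    # KKT system: [Sigma r e; r' 0 0; e' 0 0] [w; -lam; -gam] = [0; mu; 1]
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = cov
    A[:n, n] = mu_assets
    A[:n, n + 1] = e
    A[n, :n] = mu_assets
    A[n + 1, :n] = e
    b = np.zeros(n + 2)
    b[n] = target_return
    b[n + 1] = 1.0
    sol = np.linalg.solve(A, b)
    return sol[:n]  # drop the two Lagrange multipliers

# Hypothetical monthly inputs for four assets (assumed, not the post's data)
mu = np.array([0.008, 0.004, 0.003, 0.006])
cov = np.diag([0.0016, 0.0004, 0.0011, 0.0009])  # toy diagonal covariance
w = mvo_weights(mu, cov, target_return=0.006)
```

The resulting weights satisfy both constraints exactly: they sum to one and deliver the target return.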

What does this look like in practice? Let’s bring back our original portfolio, run the simulations, and then calculate the efficient frontier. We graph our original simulation with the original weighting constraint (all assets are in the portfolio) below.

Recall that after we ran this simulation we averaged the weightings of those portfolios that achieved our constraints of not less than a 7% return and not more than 10% risk on an annual basis. We then applied that weighting to our first five-year test period. We show the weighting below.

Before we look at the forward returns and the efficient frontier, let’s see where our portfolio lies in the original simulation to orient ourselves. It’s the red dot.

As is clear, the portfolio ends up at the higher end of the continuum, but other portfolios dominate it. Now the moment we’ve been waiting for—portfolio optimization! Taking a range of returns between the minimum and maximum of the simulated portfolios, we’ll calculate the weights that produce the lowest risk for each level of return.
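That sweep can be sketched in a few lines of Python. The version below is long-only (matching the simulations), uses scipy’s SLSQP solver rather than quadratic programming, and runs on toy, assumed inputs rather than the post’s data:

```python
import numpy as np
from scipy.optimize import minimize

def frontier(mean_ret, cov, n_points=25):
    """Trace the efficient frontier by minimizing variance at each
    target return between the lowest and highest asset mean.
    Long-only: weights in [0, 1] and summing to one.
    Returns an array of (risk, return) pairs in monthly units.
    """
    n = len(mean_ret)
    targets = np.linspace(mean_ret.min(), mean_ret.max(), n_points)
    pts = []
    for mu in targets:
        cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},
                {'type': 'eq', 'fun': lambda w, mu=mu: w @ mean_ret - mu})
        res = minimize(lambda w: w @ cov @ w,       # portfolio variance
                       np.full(n, 1 / n),           # start at equal weight
                       method='SLSQP', bounds=[(0, 1)] * n,
                       constraints=cons)
        if res.success:
            pts.append((np.sqrt(res.fun), mu))
    return np.array(pts)

# Toy monthly inputs (assumed, not the post's data)
mean_ret = np.array([0.008, 0.004, 0.003, 0.006])
cov = np.diag([0.0016, 0.0004, 0.0011, 0.0009])
ef = frontier(mean_ret, cov)
```

Plotting `ef` (annualized, as in the post’s charts) gives the blue line: risk on the x-axis, return on the y-axis.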

Wow! That optimization stuff sure does work. The blue line representing the efficient frontier clearly shows that there are other portfolios that could generate much higher returns for the implied level of risk we’re taking on. Alternatively, if we move horizontally to the left we see that we could achieve the same level of return at a much lower level of risk, shown by where the blue line crosses above 7% return.

Recall for illustrative purposes we used a simple version for the original weight simulation that required an investment in all assets. When we relax that constraint, we get a much wider range of outcomes, as we pointed out in the last post. What if we ran the weighting simulation with the relaxed constraint? What would our simulation and allocation look like in that case? We show those results below.

We see a much broader range of outcomes, which yields a higher weighting to bonds and a lower one to gold than the previous portfolio. Now we’ll overlay the placement of our satisfactory portfolio on the broader weight simulation along with the efficient frontier in the graph below.

Who needs mean-variance optimization when you’ve got data science simulation?! As one can see, when you allow portfolio weights to approach zero in many, but not all, of the assets, you can approximate the efficient frontier without having to rely on quadratic programming. This should give new meaning to “p-hacking.” Still, quadratic programming is likely to be a lot faster than running thousands of simulations with a large portfolio of assets. Recall that for the four-asset portfolio, relaxing the inclusion constraint tripled the number of simulations. In general, for any simulation in which some portfolios won’t be invested in all the assets, the number of calculations increases by a factor of the total number of assets minus one.
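To see where that factor comes from, here is a hypothetical Python sketch of the relaxed weighting scheme: it draws a batch of random long-only portfolios for each possible number of excluded assets (one through all-but-one), so four assets yield three times the base simulation count:

```python
import numpy as np

def random_weights_with_exclusions(n_assets, sims, rng=None):
    """Generate `sims` random long-only weight vectors for each
    number of excluded assets from 1 to n_assets - 1, for a total
    of (n_assets - 1) * sims portfolios.
    """
    rng = np.random.default_rng(rng)
    out = []
    for n_zero in range(1, n_assets):          # exclude 1 .. n-1 assets
        for _ in range(sims):
            w = rng.uniform(size=n_assets - n_zero)
            w = w / w.sum()                    # live weights sum to one
            w = np.concatenate([w, np.zeros(n_zero)])
            out.append(rng.permutation(w))     # scatter the zeros randomly
    return np.array(out)

wts = random_weights_with_exclusions(4, 1000, rng=42)
# wts has shape (3000, 4): three times the base count of 1000
```

Each row still sums to one, but anywhere from one to three of the four weights are exactly zero.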

Whatever the case, we see that the satisfactory portfolio may not be that satisfactory given how much it’s dominated by the efficient frontier. Recall, however, we weren’t trying to achieve an optimal portfolio per se. We “just” wanted a portfolio that would meet our risk-return constraints.

Let’s see what happens when we use our satisfactory portfolio’s weights on the first five-year test period. In the graph below, we calculate our portfolio’s risk and return and then place it within our weight simulation scatter plot. We also calculate the risk and return of various portfolios using the weights we derived from our efficient frontier above and add that to our graph as the blue line.

Uh oh, not so efficient. The weights from the previous efficient frontier did not achieve optimal portfolios in the future and produced an unusual shape too. This illustrates one of the main problems with mean-variance optimization: optimal weights are sensitive to return estimates. In other words, if your return estimates aren’t that great, your optimal portfolio weights won’t be so optimal. Moreover, even if your estimates reflect all presently available information, that doesn’t mean they’ll be accurate in the future.
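A quick way to feel this sensitivity is to nudge a single return estimate and re-optimize. A Python sketch with made-up numbers (a long-only minimum-variance solve, not the post’s data):

```python
import numpy as np
from scipy.optimize import minimize

def min_var_weights(mean_ret, cov, target):
    """Long-only minimum-variance weights hitting a target return."""
    n = len(mean_ret)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},
            {'type': 'eq', 'fun': lambda w: w @ mean_ret - target})
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1 / n),
                   method='SLSQP', bounds=[(0, 1)] * n, constraints=cons)
    return res.x

# Toy correlated monthly inputs (assumed numbers)
base_mu = np.array([0.007, 0.004, 0.005, 0.006])
cov = np.array([[16, 2, 1, 4],
                [2, 4, 0, 1],
                [1, 0, 9, 2],
                [4, 1, 2, 9]]) * 1e-4
w0 = min_var_weights(base_mu, cov, target=0.0055)

# Nudge one mean estimate by a mere 10 bps a month and re-solve
bumped = base_mu + np.array([0, 0.001, 0, 0])
w1 = min_var_weights(bumped, cov, target=0.0055)
shift = np.abs(w1 - w0).max()   # compare how much the weights moved
```

Tiny changes in the estimates move the “optimal” allocation by far more than the size of the nudge, which is exactly the fragility the frontier charts are showing.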

A great way to see this is to calculate the efficient frontier using as much of the data as we have, ignoring incomplete cases (which produces bias), and plotting that against the original and first five-year simulations.

You win some; you lose some. As is evident, different return estimates yield different frontiers, both retrospectively and prospectively. Should we be as skeptical of mean-variance optimization as Warren Buffett is of “geeks bearing gifts”? Not really. It’s an elegant solution to the thorny problem of portfolio construction. But it’s not very dynamic, and it doesn’t exactly allow for much uncertainty around estimates.

There have been a number of attempts to address such shortcomings, including multi-period models, inter-temporal models, and even a statistics-free approach, among others. Even summarizing these different approaches would take us far afield of this post. Suffice it to say, there isn’t a clear winner; instead, each refinement addresses a particular issue or fits a particular risk preference.

We’ve now partially revealed why we’ve been talking about a “satisfactory” portfolio all along. It’s the trade-off between satisficing and optimizing. While we cannot possibly discuss all the nuances of satisficing now, our brief explanation is this. Satisficing is finding the best available solution when the optimal one is uncertain or unattainable. The concept was developed by Herbert Simon, who argued that decision makers could choose an optimal solution to a simplified reality or a satisfactory solution to a messy one.

If the “optimal” solution to portfolio allocation is a moving target with multiple approaches to calculating it, many of which involve a great deal of complexity, then electing a “good-enough” solution might be more satisfactory. The cost to become conversant in the technical details necessary to understand some of the solutions, let alone compile all the data necessary, could be prohibitive. Of course, if you’re a fund manager being paid to outperform (i.e., beat everyone else trying to beat you), then it behooves you to seek out these arcane solutions if your competitors are apt to use them too.

This discussion explains, in part, why the “simple” 1/n or 60/40 stock/bond portfolios are so popular. The exercise of mean-variance optimization and all its offshoots may simply be too much effort if the answers it gives aren’t dramatically better than a simplified approach. But it would be wrong to lay the blame for poor results or uncertainty on MVO: financial markets have way more noise than signal.

In pursuit of the signal, our next posts will look at the “simple” portfolios and see what they produce over multiple simulations relative to the satisfactory and optimal portfolios we’ve already discussed. If you think this blog is producing more noise than signal or vice versa, we want to know! Our email address is after the R and Python code below.

R code:

# Written in R 3.6.2
# Code for any source('function.R') is found at the end.

## Load packages
suppressPackageStartupMessages({
  library(tidyquant)
  library(tidyverse)
  library(quadprog)
})

## Load data
# Note: the data-loading steps were garbled in the original extraction.
# `df` below is the simulated-portfolio data frame, with columns for
# returns, risk, Sharpe, and the four asset weights.

## Graph simulated portfolios
df %>%
  ggplot(aes(risk*sqrt(12)*100, returns*1200, color = Sharpe)) +
  geom_point(size = 1.2, alpha = 0.4) +
  scale_color_gradient(low = "darkgrey", high = "darkblue") +
  labs(x = "Risk (%)",
       y = "Return (%)",
       title = "Simulated portfolios")

## Average weights for portfolios meeting the risk-return constraints
# return_min (7%) and risk_max (10%) per the post's constraints
port_wts <- df %>%
  mutate(returns = returns*12, risk = risk*sqrt(12)) %>%
  filter(returns >= return_min, risk <= risk_max) %>%
  summarise_at(vars(4:7), mean)

port_wts %>%
  rename("Stocks" = stock, "Bonds" = bond, "Gold" = gold, "Real estate" = realt) %>%
  gather(key, value) %>%
  ggplot(aes(reorder(key, value), value*100)) +
  geom_bar(stat = 'identity', position = "dodge", fill = "blue") +
  geom_text(aes(label = round(value, 2)*100), vjust = -0.5) +
  scale_y_continuous(limits = c(0, max(port_wts*100 + 2))) +
  labs(x = "", y = "Weights (%)",
       title = "Average weights for risk-return constraints")

Python code:

# Load libraries
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline'ggplot')

# SKIP IF ALREADY HAVE DATA
# Load data
start_date = '1970-01-01'
end_date = '2019-12-31'
# symbols: the FRED series codes were lost in the original extraction
sym_names = ["stock", "bond", "gold", "realt", "rfr"]
filename = 'data_port_const.pkl'

try:
    df = pd.read_pickle(filename)
    print('Data loaded')
except FileNotFoundError:
    print("File not found")
    print("Loading data", 30*"-")
    data = web.DataReader(symbols, 'fred', start_date, end_date)
    data.columns = sym_names
    data_mon = data.resample('M').last()
    df = data_mon.pct_change()['1987':'2019']
    dat = data_mon.pct_change()['1971':'2019']
    # pd.to_pickle(df, filename)  # if you haven't saved the file

# Portfolio simulation functions
## Simulation function
class Port_sim:
    def calc_sim(df, sims, cols):
        wts = np.zeros((sims, cols))
        for i in range(sims):
            a = np.random.uniform(0, 1, cols)
            b = a/np.sum(a)
            wts[i,] = b

        mean_ret = df.mean()
        port_cov = df.cov()

        port = np.zeros((sims, 2))
        for i in range(sims):
            port[i, 0] = np.sum(wts[i,]*mean_ret)
            port[i, 1] = np.sqrt([i,].T, port_cov), wts[i,]))

        sharpe = port[:, 0]/port[:, 1]*np.sqrt(12)
        best_port = port[np.where(sharpe == max(sharpe))]
        max_sharpe = max(sharpe)

        return port, wts, best_port, sharpe, max_sharpe

    def calc_sim_lv(df, sims, cols):
        wts = np.zeros(((cols-1)*sims, cols))
        count = 0
        for i in range(1, cols):
            for j in range(sims):
                # draw cols-i live weights and scatter i zeros among them
                a = np.random.uniform(0, 1, cols-i)
                b = a/np.sum(a)
                c = np.random.choice(np.concatenate((b, np.zeros(i))), cols, replace=False)
                wts[count,] = c
                count += 1

        mean_ret = df.mean()
        port_cov = df.cov()

        port = np.zeros(((cols-1)*sims, 2))
        for i in range((cols-1)*sims):  # loop over all rows, not just the first sims
            port[i, 0] = np.sum(wts[i,]*mean_ret)
            port[i, 1] = np.sqrt([i,].T, port_cov), wts[i,]))

        sharpe = port[:, 0]/port[:, 1]*np.sqrt(12)
        best_port = port[np.where(sharpe == max(sharpe))]
        max_sharpe = max(sharpe)

        return port, wts, best_port, sharpe, max_sharpe

    def graph_sim(port, sharpe):
        plt.figure(figsize=(14, 6))
        plt.scatter(port[:, 1]*np.sqrt(12)*100, port[:, 0]*1200,
                    marker='.', c=sharpe, cmap='Blues')
        plt.colorbar(label='Sharpe ratio', orientation='vertical', shrink=0.25)
        plt.title('Simulated portfolios', fontsize=20)
        plt.xlabel('Risk (%)')
        plt.ylabel('Return (%)')

# Constraint function
def port_select_func(port, wts, return_min, risk_max):
    port_select = pd.DataFrame(np.concatenate((port, wts), axis=1))
    port_select.columns = ['returns', 'risk', 1, 2, 3, 4]
    port_wts = port_select[(port_select['returns']*12 >= return_min) &
                           (port_select['risk']*np.sqrt(12) <= risk_max)]
    # The remainder of this function was cut off in the original post.
