Optimal Versus Naive Diversification:
How Inefficient is the 1/N Portfolio Strategy?
Victor DeMiguel
London Business School
Lorenzo Garlappi
University of Texas at Austin
Raman Uppal
London Business School and CEPR
We evaluate the out-of-sample performance of the sample-based mean-variance model, and its extensions designed to reduce estimation error, relative to the naive 1/N portfolio. Of the 14 models we evaluate across seven empirical datasets, none is consistently better than the 1/N rule in terms of Sharpe ratio, certainty-equivalent return, or turnover, which indicates that, out of sample, the gain from optimal diversification is more than offset by estimation error. Based on parameters calibrated to the US equity market, our analytical results and simulations show that the estimation window needed for the sample-based mean-variance strategy and its extensions to outperform the 1/N benchmark is around 3000 months for a portfolio with 25 assets and about 6000 months for a portfolio with 50 assets. This suggests that there are still many “miles to go” before the gains promised by optimal portfolio choice can actually be realized out of sample. (JEL G11)
In about the fourth century, Rabbi Issac bar Aha proposed the following rule for asset allocation: “One should always divide his wealth into three parts: a third in land, a third in merchandise, and a third ready to hand.”1 After a “brief”
We wish to thank (the editor), two anonymous referees, and Ľuboš Pástor for extensive comments; and for their suggestions and for making available their data and computer code; and for making available data on the ten sector portfolios of the S&P 500 Index. We also gratefully acknowledge the comments from , , , , , , Francisco Gomes, , , Francisco Nogales, , , , , , , , Zhenyu Wang, and seminar participants at BI Norwegian School of Management, HEC Lausanne, HEC Montréal, London Business School, Manchester Business School, Stockholm School of Economics, University of Mannheim, University of Texas at Austin, University of Venice, University of Vienna, the Second McGill Conference on Global Asset Management, the 2004 International Symposium on Asset Allocation and Pension Management at Copenhagen Business School, the 2005 Conference on Developments in Quantitative Finance at the Institute for Mathematical Sciences at Cambridge University, the 2005 Workshop on Optimization in Finance at University of Coimbra, the 2005 meeting of the Western Finance Association, the 2005 annual meeting of INFORMS, the 2005 conference of the Institute for Quantitative Investment Research (Inquire UK), the 2006 NBER Summer Institute, the 2006 meeting of the European Finance Association, and the First Annual Meeting of the Swiss Finance Institute. Send correspondence to , McCombs School of Business, University of Texas at Austin, Austin, TX 78712; telephone: (512) 471-5682; fax: (512) 471-5073. E-mail:
1 Babylonian Talmud: Tractate Baba Mezi’a, folio 42a.
© The Author 2007. Published by Oxford University Press on behalf of The Society for Financial Studies. All rights reserved. For Permissions, please e-mail:
doi:10.1093/rfs/hhm075    Advance Access publication December 3, 2007
The Review of Financial Studies / v 22 n 5 2009
lull in the literature on asset allocation, there have been considerable advances starting with the pathbreaking work of Markowitz (1952),2 who derived the optimal rule for allocating wealth across risky assets in a static setting when investors care only about the mean and variance of a portfolio’s return. Because the implementation of these portfolios with moments estimated via their sample analogues is notorious for producing extreme weights that fluctuate substantially over time and perform poorly out of sample, considerable effort has been devoted to the issue of handling estimation error with the goal of improving the performance of the Markowitz model.3
A prominent role in this vast literature is played by the Bayesian approach to estimation error, with its multiple implementations ranging from the purely statistical approach relying on diffuse-priors (Barry, 1974; Bawa, Brown, and Klein, 1979), to “shrinkage estimators” (Jobson, Korkie, and Ratti, 1979; Jobson and Korkie, 1980; Jorion, 1985, 1986), to the more recent approaches that rely on an asset-pricing model for establishing a prior (Pástor, 2000; Pástor and Stambaugh, 2000).4 Equally rich is the set of non-Bayesian approaches to estimation error, which include “robust” portfolio allocation rules (Goldfarb and Iyengar, 2003; Garlappi, Uppal, and Wang, 2007); portfolio rules designed to optimally diversify across market and estimation risk (Kan and Zhou, 2007); portfolios that exploit the moment restrictions imposed by the factor structure of returns (MacKinlay and Pástor, 2000); methods that focus on reducing the error in estimating the covariance matrix (Best and Grauer, 1992; Chan, Karceski, and Lakonishok, 1999; Ledoit and Wolf, 2004a, 2004b); and, finally, portfolio rules that impose shortselling constraints (Frost and Savarino, 1988; Chopra, 1993; Jagannathan and Ma, 2003).5
Our objective in this paper is to understand the conditions under which mean-variance optimal portfolio models can be expected to perform well even in the presence of estimation risk. To do this, we evaluate the out-of-sample performance of the sample-based mean-variance portfolio rule—and its various extensions designed to reduce the effect of estimation error—relative to the performance of the naive portfolio diversification rule. We define the naive rule to be one in which a fraction 1/N of wealth is allocated to each of the N assets available for investment at each rebalancing date. There are two reasons for using the naive rule as a benchmark. First, it is easy to implement because it does not rely either on estimation of the moments of asset returns or on
2 Some of the results on mean-variance portfolio choice in Markowitz (1952, 1956, 1959) and Roy (1952) had already been anticipated in 1940 by de Finetti, whose paper is now available in English translation in Barone (2006).
3 For a discussion of the problems in implementing mean-variance optimal portfolios, see Hodges and Brealey (1978), Michaud (1989), Best and Grauer (1991), and Litterman (2003). For a general survey of the literature on portfolio selection, see Campbell and Viceira (2002) and Brandt (2007).
4 Another approach, proposed by Black and Litterman (1990, 1992), combines two sets of priors—one based on an equilibrium asset-pricing model and the other on the subjective views of the investor—which is not strictly Bayesian, because a Bayesian approach combines a prior with the data.
5 Michaud (1998) has advocated the use of resampling methods; Scherer (2002) and Harvey et al. (2003) discuss the various limitations of this approach.
Table 1
List of various asset-allocation models considered

 #   Model                                                      Abbreviation
 0.  1/N with rebalancing (benchmark strategy)                  ew or 1/N
     Classical approach that ignores estimation error
 1.  Sample-based mean-variance                                 mv
     Bayesian approach to estimation error
 2.  Bayesian diffuse-prior                                     Not reported
 3.  Bayes-Stein                                                bs
 4.  Bayesian Data-and-Model                                    dm
     Moment restrictions
 5.  Minimum-variance                                           min
 6.  Value-weighted market portfolio                            vw
 7.  MacKinlay and Pástor’s (2000) missing-factor model         mp
     Portfolio constraints
 8.  Sample-based mean-variance with shortsale constraints      mv-c
 9.  Bayes-Stein with shortsale constraints                     bs-c
10.  Minimum-variance with shortsale constraints                min-c
11.  Minimum-variance with generalized constraints              g-min-c
     Optimal combinations of portfolios
12.  Kan and Zhou’s (2007) “three-fund” model                   mv-min
13.  Mixture of minimum-variance and 1/N                        ew-min
14.  Garlappi, Uppal, and Wang’s (2007) multi-prior model       Not reported
This table lists the asset-allocation models we consider. The last column gives the abbreviation used to refer to each strategy in the tables where we compare the performance of the optimal portfolio strategies to that of the 1/N strategy. Results for two strategies are not reported. We do not report results for the Bayesian diffuse-prior strategy because, for estimation windows of the length we consider (60 or 120 months), the Bayesian diffuse-prior portfolio is very similar to the sample-based mean-variance portfolio. We do not report results for the multi-prior robust portfolio described in Garlappi, Uppal, and Wang (2007) because they show that the optimal robust portfolio is a weighted average of the mean-variance and minimum-variance portfolios, results for both of which are already reported.
optimization. Second, despite the sophisticated theoretical models developed in the last 50 years and the advances in methods for estimating the parameters of these models, investors continue to use such simple allocation rules for allocating their wealth across assets.6 We wish to emphasize, however, that the purpose of this study is not to advocate the use of the 1/N heuristic as an asset-allocation strategy, but merely to use it as a benchmark to assess the performance of various portfolio rules proposed in the literature.
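The naive benchmark itself can be stated in a couple of lines. As a minimal sketch (in Python, purely for illustration; the function name is ours, not the paper’s):

```python
import numpy as np

def one_over_n_weights(n_assets: int) -> np.ndarray:
    """Naive 1/N rule: allocate an equal fraction of wealth to each asset."""
    return np.full(n_assets, 1.0 / n_assets)

# At each rebalancing date the portfolio is simply reset to equal weights,
# with no estimation of means or covariances and no optimization step.
w = one_over_n_weights(5)  # array([0.2, 0.2, 0.2, 0.2, 0.2])
```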
We compare the out-of-sample performance of 14 different portfolio models relative to that of the 1/N policy across seven empirical datasets of monthly returns, using the following three performance criteria: (i) the out-of-sample Sharpe ratio; (ii) the certainty-equivalent (CEQ) return for the expected utility of a mean-variance investor; and (iii) the turnover (trading volume) for each portfolio strategy. The 14 models are listed in Table 1 and discussed in Section 1. The seven empirical datasets are listed in Table 2 and described in Appendix A.
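The three criteria can be computed directly from a strategy’s out-of-sample excess returns and portfolio weights. A minimal sketch, assuming monthly excess returns and a mean-variance risk aversion of γ = 1 (the function names and the choice of γ here are illustrative assumptions, not the paper’s code):

```python
import numpy as np

def sharpe_ratio(excess_returns: np.ndarray) -> float:
    """Out-of-sample Sharpe ratio: sample mean over sample std of excess returns."""
    return excess_returns.mean() / excess_returns.std(ddof=1)

def ceq_return(excess_returns: np.ndarray, gamma: float = 1.0) -> float:
    """Certainty-equivalent return for a mean-variance investor with risk aversion gamma."""
    return excess_returns.mean() - 0.5 * gamma * excess_returns.var(ddof=1)

def turnover(target_weights: np.ndarray, drifted_weights: np.ndarray) -> float:
    """Average across rebalancing dates of the sum of absolute trades:
    |target weight - weight held just before rebalancing|."""
    return np.abs(target_weights - drifted_weights).sum(axis=1).mean()

# Example: two out-of-sample months of excess returns, one rebalancing date.
r = np.array([0.01, 0.03])
sr = sharpe_ratio(r)                       # ≈ 1.414
ceq = ceq_return(r)                        # 0.0199
to = turnover(np.array([[0.5, 0.5]]),      # rebalance back to 50/50 ...
              np.array([[0.6, 0.4]]))      # ... from drifted 60/40 → 0.2
```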
6 For instance, Benartzi and Thaler (2001) document that investors allocate their wealth across assets using the naive 1/N rule. Huberman and Jiang (2006) find that participants tend to invest in only a small number of the funds offered to them, and that they tend to allocate their contributions evenly across the funds that they use, with this tendency weakening with the number of funds used.
Table 2
List of datasets considered

 #  Dataset and source                                   N             Time period       Abbreviation
 1  Ten sector portfolios of the S&P 500 and            10 + 1        01/1981–12/2002   S&P Sectors
    the US equity market portfolio
 2  Ten industry portfolios and the US equity           10 + 1        07/1963–11/2004   Industry
    market portfolio (Source: ’s Web site)
 3  Eight country indexes and the World Index           8 + 1         01/1970–07/2001   International
    (Source: MSCI)
 4  SMB and HML portfolios and the US equity            2 + 1         07/1963–11/2004   MKT/SMB/HML
    market portfolio (Source: ’s Web site)
 5  Twenty size- and book-to-market portfolios          20 + 1        07/1963–11/2004   FF-1-factor
    and the US equity MKT (Source: ’s Web site)
 6  Twenty size- and book-to-market portfolios          20 + 3        07/1963–11/2004   FF-3-factor
    and the MKT, SMB, and HML portfolios
    (Source: ’s Web site)
 7  Twenty size- and book-to-market portfolios          20 + 4        07/1963–11/2004   FF-4-factor
    and the MKT, SMB, HML, and UMD portfolios
    (Source: ’s Web site)
 8  Simulated data (Source: market model)               {10, 25, 50}  2000 years        —
This table lists the various datasets analyzed; the number of risky assets N in each dataset, where the number after the “+” indicates the number of factor portfolios available; and the time period spanned. Each dataset contains monthly excess returns over the 90-day nominal US T-bill (from ’s Web site). In the last column is the abbreviation used to refer to the dataset in the tables evaluating the performance of the various portfolio strategies. Note that as in Wang (2005), of the 25 size- and book-to-market-sorted portfolios, we exclude the five portfolios containing the largest firms, because the market, SMB, and HML are almost a linear combination of the 25 Fama-French portfolios. Note also that in Datasets #5, 6, and 7, the only difference is in the factor portfolios that are available: in Dataset #5, it is the US equity MKT; in Dataset #6, they are the MKT, SMB, and HML portfolios; and in Dataset #7, they are the MKT, SMB, HML, and UMD portfolios. Because the results for the “FF-3-factor” dataset are almost identical to those for “FF-1-factor,” only the results for “FF-1-factor” are reported.
Our first contribution is to show that of the 14 models evaluated, none is consistently better than the naive 1/N benchmark in terms of Sharpe ratio, certainty-equivalent return, or turnover. Although this was shown in the literature with regard to some of the earlier models,7 we demonstrate that this is true: (i) for a wide range of models that include several developed more recently; (ii) using three performance metrics; and (iii) across several datasets. In general, the unconstrained policies that try to incorporate estimation error perform much worse than any of the strategies that constrain shortsales, and also perform much worse than the 1/N strategy. Imposing constraints on the sample-based mean-variance and Bayesian portfolio strategies leads to only a modest improvement in Sharpe ratios and CEQ returns, although it substantially reduces turnover. Of all the optimizing models studied here, the minimum-variance portfolio with constraints studied in Jagannathan and Ma
7 Bloomfield, Leftwich, and Long (1977) show that sample-based mean-variance optimal portfolios do not out- perform an equally-weighted portfolio, and Jorion (1991) finds that the equally-weighted and value-weighted indices have an out-of-sample performance similar to that of the minimum-variance portfolio and the tangency portfolio obtained with Bayesian shrinkage methods.
(2003) performs best in terms of Sharpe ratio. But even this model delivers a Sharpe ratio that is statistically superior to that of the 1/N strategy in only one of the seven empirical datasets, a CEQ return that is not statistically superior to that of the 1/N strategy in any of these datasets, and a turnover that is always higher than that of the 1/N policy.
To understand better the reasons for the poor performance of the optimal portfolio strategies relative to the 1/N benchmark, our second contribution is to derive an analytical expression for the critical length of the estimation window that is needed for the sample-based mean-variance strategy to achieve a higher CEQ return than that of the 1/N strategy. This critical estimation-window length is a function of the number of assets, the ex ante Sharpe ratio of the mean-variance portfolio, and the Sharpe ratio of the 1/N policy. Based on parameters calibrated to US stock-market data, we find that the critical length of the estimation window is 3000 months for a portfolio with only 25 assets, and more than 6000 months for a portfolio with 50 assets. The severity of estimation error is startling if we consider that, in practice, these portfolio models are typically estimated using only 60 or 120 months of data.
Because the above analytical results are available only for the sample-based mean-variance strategy, we use simulated data to examine its various extensions that have been developed explicitly to deal with estimation error. Our third contribution is to show that these models too need very long estimation windows before they can be expected to outperform the 1/N policy. From our simulation results, we conclude that portfolio strategies from the optimizing models are expected to outperform the 1/N benchmark if: (i) the estimation window is long; (ii) the ex ante (true) Sharpe ratio of the mean-variance efficient portfolio is substantially higher than that of the 1/N portfolio; and (iii) the number of assets is small. The first two conditions are intuitive. The reason for the last condition is that a smaller number of assets implies fewer parameters to be estimated and, therefore, less room for estimation error. Moreover, other things being equal, a smaller number of assets makes naive diversification less effective relative to optimal diversification.
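A stripped-down version of such a simulation experiment might look as follows. All parameter values here (the true means, volatilities, and the diagonal covariance matrix) are arbitrary assumptions chosen for illustration, not the calibration used in the paper; the exercise simply pits the rolling-window sample-based mean-variance strategy against the 1/N rule on i.i.d. simulated excess returns:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_comparison(n_assets=10, window=120, t_total=3000, gamma=1.0):
    """Hypothetical sketch: compare out-of-sample Sharpe ratios of the
    sample-based mean-variance strategy (rolling estimation window) and 1/N."""
    # Assumed "true" monthly moments (not the paper's calibration).
    mu = rng.uniform(0.005, 0.010, n_assets)
    sigma = np.diag(rng.uniform(0.04, 0.08, n_assets) ** 2)
    r = rng.multivariate_normal(mu, sigma, size=t_total)  # simulated excess returns

    r_mv, r_ew = [], []
    for t in range(window, t_total):
        sample = r[t - window:t]
        mu_hat = sample.mean(axis=0)
        sigma_hat = np.cov(sample, rowvar=False)
        # Unnormalized mean-variance weights on the risky assets,
        # remainder implicitly in the risk-free asset.
        w = np.linalg.solve(sigma_hat, mu_hat) / gamma
        r_mv.append(w @ r[t])
        r_ew.append(r[t].mean())  # 1/N portfolio excess return

    sr = lambda x: np.mean(x) / np.std(x, ddof=1)
    return sr(r_mv), sr(r_ew)
```

Varying `window`, `n_assets`, and the gap between the true Sharpe ratios of the two strategies in such a setup is one way to trace out the three conditions listed above.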
The intuition for our findings is that to implement the mean-variance model, both the vector of expected excess returns over the risk-free rate and the variance-covariance matrix of returns have to be estimated. It is well known (Merton, 1980) that a very long time series of data is required in order to estimate expected returns precisely; similarly, the estimate of the variance-covariance matrix is poorly behaved (Green and Hollifield, 1992; Jagannathan and Ma, 2003). The portfolio weights based on the sample estimates of these moments result in extreme positive and negative weights that are far from optimal.8 As
8 Consider the following extreme two-asset example. Suppose that the true per annum mean and volatility of returns for both assets are the same, 8% and 20%, respectively, and that the correlation is 0.99. In this case, because the two assets are identical, the optimal mean-variance weights for the two assets would be 50%. If, on the other hand, the mean return on the first asset is not known and is estimated to be 9% instead of 8%, then the mean-variance model would recommend a weight of 635% in the first asset and −535% in the second. That is,
a result, “allocation mistakes” caused by using the 1/N weights can turn out to be smaller than the error caused by using the weights from an optimizing model with inputs that have been estimated with error. Although the “error-maximizing” property of the mean-variance portfolio has been described in the literature (Michaud, 1989; Best and Grauer, 1991), our contribution is to show that because the effect of estimation error on the weights is so large, even the models designed explicitly to reduce the effect of estimation error achieve only modest success.
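The arithmetic in the two-asset example of footnote 8 can be checked directly, using the standard result that mean-variance weights are proportional to Σ⁻¹μ (normalized here so the risky weights sum to one):

```python
import numpy as np

# Footnote 8's two-asset example: both volatilities 20%, correlation 0.99.
vol, rho = 0.20, 0.99
cov = vol**2 * np.array([[1.0, rho], [rho, 1.0]])

# With equal true means (8%, 8%), the normalized weights are 50/50.
w_true = np.linalg.solve(cov, np.array([0.08, 0.08]))
w_true /= w_true.sum()

# With asset 1's mean misestimated as 9% instead of 8%:
w_est = np.linalg.solve(cov, np.array([0.09, 0.08]))
w_est /= w_est.sum()
# w_est is roughly [6.35, -5.35]: a 635% long and a 535% short position.
```

A one-percentage-point error in a single estimated mean thus flips a 50/50 allocation into an extreme levered long-short position, which is the error-maximizing behavior described above.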
A second reason why the 1/N rule performs well in the datasets we consider is that we are using it to allocate wealth across portfolios of stocks rather than individual stocks. Because diversified portfolios have lower idiosyncratic volatility than individual assets, the loss from naive as opposed to optimal diversification is much smaller when allocating wealth across portfolios. Our simulations show that optimal diversification policies will dominate the 1/N rule only for very high levels of idiosyncratic volatility. Another advantage of the 1/N rule is that it is straightforward to apply to a large number of assets, in contrast to optimizing models, which typically require additional parameters to be estimated as the number of assets increases.
In all our experiments, the choice of N has been dictated by the dataset. A natural question that arises then is: What is N? That is, for what number and kind of assets does the 1/N strategy outperform other optimizing portfolio models? The results show that the naive 1/N strategy is more likely to outperform the strategies from the optimizing models when: (i) N is large, because this improves the potential for diversification, even if it is naive, while at the same time increasing the number of parameters to be estimated by an optimizing model; (ii) the assets do not have a sufficiently long data history to allow for a precise estimation of the moments. In the empirical analysis, we consider datasets with N = {3, 9, 11, 21, 24} and assets from equity portfolios