It's evident from the previous posts that parameter stability is an issue, at least when estimating a security's exposure to the market. This particular factor exposure is pretty much like market beta - maybe just like market beta (?). Providers of beta make a lot of adjustments. I don't know whether any of this is appropriate in a factor model (anyone who knows about this, please comment), but I thought I'd link to some of the related literature: On Estimating the Beta Coefficient, D. Bradfield; Periodic Return Time-Series, Capitalization Adjustments, and Beta Estimation, E. Jarnecic, M. McCorry & R. Winn; Capturing Market Risk in a Volatile World, MSCI Barra Research Bulletin.
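For reference, the best-known vendor adjustment (usually attributed to Blume, and the basis of the "adjusted beta" many data providers report) simply shrinks the raw regression estimate toward 1 with fixed two-thirds / one-third weights. A minimal sketch with a hypothetical raw estimate - whether this kind of shrinkage belongs inside a factor model is exactly the open question:

    rawBeta = 1.30;                            (* hypothetical raw OLS estimate *)
    adjustedBeta = (2/3) rawBeta + (1/3) 1.0   (* shrinks 1.30 to 1.20 *)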
Walt French, a quantitative fund manager, made some comments regarding parameter stability and multi-factor models that are interesting enough to be included (with his permission) in the body of the blog:
Let me propose a simple thought experiment. Consider a market in which stocks are affected by two common factors: future GDP and interest rates -- both of which drive a DDM-type price model -- and an idiosyncratic factor affecting mostly individual firms' market share. News about GDP and interest rates arrives at irregular intervals, as does idiosyncratic data on firms.
During the 60 months ending in December, 2005, GDP was the big story. The Fed just kept nodding its head, saying all was well. Stock prices reacted mostly to changes in the GDP outlook; "high beta" stocks were growth stocks most dependent on future expansion. But in the 24 months ending January, 2008, news from the Fed dominated the average stock's response. "Value" stocks were the big (down!) movers as changes in credit, as well as Fed intervention in credit, drove some stocks wild. The previously high-beta stocks didn't respond so much to the market average.
I think the multi-factor story is important in interpreting that beta coefficients should change over time, because we get a different mix of news. The Rosenberg and Guy paper highlights company changes as the basis for changing betas, but the macro-environmental changes also need to be understood as moving betas around, perhaps radically. From my perspective, this is more important than the noise of any single stock, since such noise cancels out, by definition, in larger portfolios, while the macro effects, being common factors, only become more obvious with portfolio diversification.
So forecasting beta requires a forecast over one's target time horizon of (1) each source of market variance and (2) various portfolios' sensitivities to those sources. Lots of papers have gone into the estimation errors when there is a single source of market variance, but modern practice -- and the multi-factor models that underlie this blog's approach -- have to consider their variance forecasts as more important. In a diversified portfolio, these may be the greater source of beta uncertainty.
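To make Walt's point concrete, here is a minimal Mathematica sketch with entirely hypothetical numbers (two factors, diagonal factor covariance forecasts, specific risk ignored since it diversifies away). The portfolio's factor exposures never change; only the factor variance forecasts do, and the implied market beta moves anyway:

    fcovA = DiagonalMatrix[{0.0020, 0.0002}];   (* GDP news dominates, as in Walt's first period *)
    fcovB = DiagonalMatrix[{0.0002, 0.0020}];   (* rate/credit news dominates, as in his second *)
    bPort = {1.3, 0.4};                         (* hypothetical portfolio factor exposures *)
    bMkt = {1.0, 1.0};                          (* hypothetical market factor exposures *)
    impliedBeta[f_] := (bPort.f.bMkt)/(bMkt.f.bMkt);
    {impliedBeta[fcovA], impliedBeta[fcovB]}    (* same exposures, roughly 1.22 vs 0.48 *)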
I found a link to the Rosenberg and Guy paper Walt referred to. Surprisingly, not everyone has heard of Barr Rosenberg. Barr is to Beta as Buddha is to Buddhism. Here, in addition, is a link to the Berkeley Research Program in Finance Working Papers, worth the cost of admission by themselves.
Tuesday, April 8, 2008
Parameter Stability - 3
My previous post (Parameter Stability-2) showed that even when I excluded outlying returns from the regression of IBM's and HP's monthly returns against the S&P 500 Index and the Fama French market factor, I still got sharp jumps in estimated beta as I moved across 36 years of history. But, as I realized while making the post, I didn't do the exclusion correctly. I should have re-estimated the mean and standard deviation for each sub-period. I redid the study, this time against the S&P 500 Index because it was convenient and because all I care about for the moment are methodological requirements for parameter stability. The estimation period is 36 months long. I excluded returns that were more than 2.9 standard deviations away from the mean for each period. No returns were excluded for most periods; one or two returns were excluded about 15% of the time; three returns were excluded in about 0.5% of the periods. Although the r-squares jump around a lot, the betas are a lot more stable now.
Data for IBM is on the left below; HP is on the right. The Mathematica code shown produced the two plots on the left. I pasted the ones on the right later.
Procedurally, the code gets lists of IBM, HP and S&P 500 returns from Wolfram Research, transposes two of the lists into a single list of paired values for the regression, and then runs 396 regressions, recalculating the sample mean and standard deviation for each window, filtering that window's returns for outliers, running the regression, appending the estimated beta and r-squared to the list {params}, and finally plotting the values in the list. Pretty neat, huh? Click the image for a better view.
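Since the code itself is only visible in the image, here is a rough sketch of the same procedure. The names ibmReturns and spReturns stand for the aligned monthly return lists pulled from FinancialData, and LinearModelFit (available in later Mathematica versions) stands in for whatever the screenshot actually uses:

    windowBetas[stock_, index_, window_: 36, cutoff_: 2.9] :=
      Table[
        Module[{s, m, mu, sigma, kept, lm},
          s = stock[[i ;; i + window - 1]];              (* this window's stock returns *)
          m = index[[i ;; i + window - 1]];              (* and index returns *)
          mu = Mean[s]; sigma = StandardDeviation[s];    (* recomputed for each window *)
          kept = Select[Transpose[{m, s}], Abs[#[[2]] - mu] <= cutoff sigma &];
          lm = LinearModelFit[kept, x, x];
          {lm["BestFitParameters"][[2]], lm["RSquared"]}],   (* {beta, r-squared} *)
        {i, 1, Length[stock] - window + 1}];

    params = windowBetas[ibmReturns, spReturns];
    ListLinePlot[params[[All, 1]]]   (* rolling betas; params[[All, 2]] gives the r-squares *)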
Labels: HP, IBM, Mathematica, Parameter Stability, Regression
Parameter Stability - 2
My previous post (Parameter Stability-1) showed the betas and r-squares obtained from 366 rolling 36-month regressions of IBM against the Fama French market factor. I was surprised to see how unstable the regression parameters were. I suspected data outliers were the problem, but some preliminary testing showed that the sharp jumps in the parameters were not completely due to extreme returns. I decided to expand this study by looking at both HP and IBM, and regressing against both the S&P 500 Index and the Fama French market factor. I excluded stock returns that were more than 4 standard deviations from the stock's long-term mean.
The results can be seen below. The upper four graphs show results for IBM; the lower four show results for HP. The graphs on the left show regressions against the Fama French data; the ones on the right are against the S&P 500. The graphs are paired: beta above and r-squared below. Click the image for a better view.
I was surprised at the results, but was somewhat comforted by the fact that they were pretty much the same for both stocks and against either index. However, just as I was writing this post, it occurred to me that I hadn't grasped the implication of using the whole 36-year period as the basis for excluding returns. I should have excluded outliers based on the returns in each 36-month period. I'll do that next. But even though it's clear that I made a methodological error, comments would be appreciated.
Labels: Fama French, HP, IBM, Mathematica, Parameter Stability, Regression
Parameter Stability - 1
My previous post (IBM vs the Fama French market factor) showed the results of a regression of 426 months of IBM returns against the Fama French market factor. I was working out the details of getting the data and using Mathematica to do the regression. Now let's take a closer look at that data. Let's look at the stability of IBM's exposure to the Fama French market factor. The image below shows a plot of 366 rolling 60-month factor betas, and the corresponding r-squares. Not very stable! However, notice the discontinuities in the r-square plot (enclosed by two red boxes). This strongly suggests that data outliers are causing problems. My next step will be to define procedures to identify and deal with outliers.
Look how little Mathematica code it took to do this work (Click on the image below to see the details). I like Mathematica more every day.
Labels: Fama French, IBM, Mathematica, Parameter Stability, Regression
IBM vs the Fama French Market Factor
I'm more interested in economic factor models (EFM) than fundamental factor models. Since the heart of building an EFM is estimating the exposures (betas) of stocks to economic time series with regressions, I'm going to start with that. This is pretty basic, but I have an additional challenge in that I've decided to do everything with Mathematica (M-). It's extremely powerful, but has a steep learning curve. Starting with something basic will make it easier for me to learn about M-.
Finding sources for data and making sure it's good data is another challenge, and not a trivial one. Mathematica helps me out a bit because version 6 comes with built-in access to a financial database, but it's not perfect either. The Mathematica financial data function (FinancialData["IBM", "Returns", {to, from, period}]) comes back with price returns. The financial database at Wolfram Research is a new thing for them, so they still have a few bugs to work out. I'll solve the total return problem later.
My first exercise is simple. I want to calculate IBM's exposure to the Fama French market factor (this is basically IBM's market beta). This regression is based on approximately 36 years of monthly returns. IBM's returns are price returns instead of total returns, and not excess returns either, but I'm anxious to get started! Look how little M- code it took to load the data, do the regression and make two plots. The regression returns a beta of 1.30 with an r-squared of 34%. Not a bad r-squared! Click the image below for a better view.
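Because the code lives in the image, here is a rough sketch of the same steps. The FinancialData call follows the form quoted above (the dates are only illustrative), ffMkt stands for the Fama French market factor series loaded separately (e.g. by importing Ken French's data file), and LinearModelFit (from later Mathematica versions) stands in for whatever the screenshot uses:

    ibm = FinancialData["IBM", "Returns", {{1972, 1, 1}, {2008, 3, 31}, "Month"}];  (* monthly price returns *)
    data = Transpose[{ffMkt, ibm}];   (* pair each factor return with the IBM return *)
    lm = LinearModelFit[data, x, x];
    lm["BestFitParameters"]           (* {alpha, beta}; the post reports a beta of about 1.30 *)
    lm["RSquared"]                    (* about 0.34 in the post *)
    ListPlot[data]                    (* IBM returns against the market factor *)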
Thursday, April 3, 2008
A Gentle Introduction to Factor Models
Modern investment theory is based largely on two ideas. The first, due to the work of Harry Markowitz, is that in an efficient financial market, higher return expectations require higher risk exposures. The second, established by William F. Sharpe, is that since the risks associated with individual securities tend to cancel each other out in diversified portfolios, exposure to higher "specific" risks is not associated with higher expected returns. However, there is a payoff for exposure to greater "systematic" risk (risk that cannot be diversified away). Sharpe's theory is known as the Capital Asset Pricing Model (CAPM). Both Markowitz and Sharpe received Nobel Prizes for their seminal work.
Sharpe's CAPM defined systematic risk as the risk associated with exposure to the volatile, unpredictable swings of the stock market, and quantified systematic risk as the slope (beta) obtained from a regression of a security's returns against the market's returns. The CAPM was the first factor model. While the CAPM attributed risk to a single systematic factor, arbitrage pricing theory (APT), first presented by Stephen Ross, established a firm theoretical foundation for the existence of multiple systematic sources of risk and return, and paved the way for the multi-factor models of today.
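In other words, beta is just the slope of that regression, which can also be computed directly as the covariance of the security's returns with the market divided by the market's variance. A tiny sketch using simulated placeholder returns:

    rm = RandomVariate[NormalDistribution[0.007, 0.045], 120];      (* placeholder market returns *)
    ri = 1.2 rm + RandomVariate[NormalDistribution[0, 0.05], 120];  (* a security with true beta 1.2 *)
    capmBeta = Covariance[rm, ri]/Variance[rm]                      (* comes out near 1.2 *)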
Contemporary factor models are generally classified into three groups: fundamental, statistical, and econometric. Each type of model has its strengths and weaknesses, but with the proper statistical approach, an econometric factor model can combine all three types of factors. Due to its intuitive appeal and its generality, we are going to focus on (and build) an econometric factor model.
Econometric factor models are based on the idea that the returns of individual assets are influenced by the market itself, by broad economic variables like unemployment rates, interest rates, consumer sentiment, and business activity, and by factors with names like "size" or "value/growth" that are thought to capture, for example, the risk associated with companies that are small, not widely known, or poorly researched (size), and with companies that are recently troubled, out of favor, or not currently valued at a premium (value/growth).
In an econometric factor model, the values of the economic factors are known, but the "exposure" of each individual stock to those factors is not. The exposures for each stock must be estimated with a multi-factor regression of the stock's returns over some time period against the values of the economic factors over the same period. The regression model is a conventional one, resulting in an alpha, the exposures (also called the factor betas), and an error term for each stock. The error term is a normally distributed variable with zero mean and a stock-specific variance.
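Here is a minimal sketch of that regression for a single stock, using simulated placeholder data (four hypothetical factor series, a known alpha and set of exposures, and a noise term) so the estimates can be checked against the truth:

    n = 48;                                                        (* e.g. Jan 2000 - Dec 2003 *)
    factors = RandomVariate[NormalDistribution[0, 0.02], {n, 4}];  (* placeholder factor values *)
    stock = 0.002 + factors.{1.1, -0.3, 0.5, 0.2} +
       RandomVariate[NormalDistribution[0, 0.05], n];              (* simulated stock returns *)
    lm = LinearModelFit[Join[factors, Transpose[{stock}], 2], {f1, f2, f3, f4}, {f1, f2, f3, f4}];
    lm["BestFitParameters"]   (* alpha followed by the four estimated factor exposures *)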
The table below shows the factor exposure estimates for the stocks discussed in QEPM's chapter on econometric models (Table 7.4, page 224). These estimates are based on regressions for the period from January 2000 to December 2003.
The factor exposures should make economic sense. For example, Wal-Mart goes up when unemployment increases. Exxon Mobil is least affected by consumer sentiment. Microsoft is the most sensitive to the market. The returns of all of these large company stocks are hurt during periods when large stocks do poorly.
It's easy to write the equation for the expected return of a portfolio of stocks once you have estimated the alpha and factor exposures of the stocks in the portfolio, because a portfolio's alpha and factor exposures are simply the weighted averages of the alphas and exposures of the stocks it holds. The expected return of a portfolio equals its alpha plus the sum of its factor exposures times the values either observed or forecast for the economic time series variables that make up the model.
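A minimal worked example with made-up numbers (two stocks, two factors) to show the weighted-average bookkeeping:

    w = {0.6, 0.4};                   (* portfolio weights *)
    alphas = {0.001, 0.002};          (* per-stock alphas *)
    b = {{1.1, -0.3}, {0.9, 0.5}};    (* per-stock factor exposures *)
    fForecast = {0.004, -0.001};      (* observed or forecast factor values *)
    portAlpha = w.alphas;             (* 0.0014 *)
    portB = w.b;                      (* {1.02, 0.02} *)
    expectedReturn = portAlpha + portB.fForecast   (* about 0.0055 *)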
The risk equation is more complicated (we'll discuss it later), but once you have the risk and return equations you can maximize a portfolio's return given a specific level of risk, minimize its tracking error to a benchmark, or minimize its exposure to one risk while maintaining its exposure to others.
For a less gentle introduction to factor models, see: Estimating A Combined Linear Factor Model. (There is a large literature on factor models. I'll add a few more links when I find good ones.)
Having finished this post, I'll get back to building one.
Tuesday, April 1, 2008
Bayesian Statistics + factor models = Alpha Mojo
Bayesian statistics can be used to combine qualitative investment ideas with quantitative financial models in an objective manner. Chincarini & Kim refer to Bayesian statistics, along with the use of leverage and market-neutral exposure, as a source of "Alpha Mojo." We will talk about Bayesian statistics in more detail later, after we have finished building our factor model, but I thought readers might enjoy the following links to articles on Bayesian statistics. Both are written by Eliezer Yudkowsky, who is a Research Fellow at The Singularity Institute for Artificial Intelligence. Yudkowsky also wrote Creating Friendly AI 1.0, which people with an interest in artificial intelligence and The Singularity will find interesting.
Labels: alpha mojo, Bayesian statistics, singularity, yudkowsky