When to Buy and When to Sell Equities?
Investors face the fundamental problem of assessing the value of equities
as an investment vehicle. Are equities significantly undervalued, offering the prospect of rising future returns? Or are they fairly valued, or even overvalued? There are a number of answers to these questions, but few concrete solutions. This paper aims to categorise the main investment models which have a basis in economics.
There are three basic approaches for the analysis of the value of the
equity market as a whole, rather than the shares of individual companies. This paper aims to explore them from the perspective of their
broad conceptual underpinnings, the available evidence and their data
requirements. Broadly the three main models to be discussed, which
include a number of sub-variants, can be categorised as follows: Shiller’s
Cyclically Adjusted Price-Earnings model (CAPE), Tobin’s q (QR) and
the Equitisation Ratio (ER).
The CAPE (cyclically adjusted price/earnings ratio) measure of the relative
value of equities is based on the familiar price earnings (p/e) investment ratio used by analysts as a measure of fundamental value.
A p/e ratio is
calculated by dividing a company’s current share price by its earnings per
share (EPS) or by the ratio of market capitalisation to after-tax earnings.
EPS are measured as earnings divided by the weighted average number
of shares in issue during the relevant period.
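The two equivalent p/e definitions above can be sketched in Python; the company figures below are hypothetical and purely illustrative:

```python
def pe_from_eps(share_price, eps):
    """p/e ratio: current share price divided by earnings per share."""
    return share_price / eps

def pe_from_totals(market_cap, after_tax_earnings):
    """Equivalent p/e ratio: market capitalisation over after-tax earnings."""
    return market_cap / after_tax_earnings

# Hypothetical company: 10m shares at $50 with $25m after-tax earnings.
shares, price, earnings = 10_000_000, 50.0, 25_000_000
eps = earnings / shares  # $2.50 per share
# Both definitions coincide because market cap = price x shares in issue.
assert pe_from_eps(price, eps) == pe_from_totals(price * shares, earnings)
```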
The relative value of a company’s share is assessed by comparing actual
or forecast p/e ratios measured against a yardstick of fair value based on the
p/e multiples of rival companies, the sector in which the company trades
or the equity market as a whole.
In order to assess the relative value of the
stock market, the relevant p/e ratio used for comparative purposes is the
aggregate of a collection of companies comprising a market index such as
the S&P 500 index, for example. Since market indices are highly volatile
over time, the core of the equity valuation question is in assessing whether
or not the market itself is over- or underpriced.
Assessing the relative value of the S&P 500 or the FTSE 250, for example, necessitates that the current value of the market p/e ratio is again
measured against some fair value yardstick. There are a number of these
in use including those based on a long-term relationship between the
return on equities and that of relatively risk-free assets such as sovereign bonds. This paper analyses one of them which has increased in popularity
over the last decade – the cyclically adjusted price earnings ratio (CAPE)
first publicised by Professor Shiller in a book published in 2000. The book successfully predicted that a market correction was due since the shares traded
on the US stock market were overpriced on the basis of their long-term
intrinsic value. In order to calculate CAPE, Shiller adjusted the S&P 500
p/e ratio, measured as an index over the trailing-12-month earnings of S&P 500 companies, and calculated 10-year moving averages of market
value in order to adjust for the short-term impact of the booms and busts
of the business cycle.
The CAPE is based on the assumption that actual indicators of market
value may be distorted by the prevailing economic climate, but that there
is generally a regression to a long-term inflation-adjusted mean value for
the share of earnings in GDP and for the rate of return on equities. During
economic expansions, companies have high profit margins and earnings
rise so the market p/e ratio becomes artificially low compared to its long-term average. In contrast, during recessions, profit margins are low and
earnings are low so the regular p/e ratio becomes higher. In consequence,
a 10-year moving average of inflation-adjusted monthly p/e data is calculated to smooth out the effect on relative value of these fluctuations in
economic activity and for use as a yardstick of fair value.
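The mechanics described above can be sketched as follows. This is a simplified reading of the method, not Shiller's own code: nominal monthly series are converted to real terms using the CPI, and the CAPE is taken as the real index level over the trailing 10-year (120-month) average of real earnings.

```python
def to_real(nominal, cpi, base_cpi):
    """Convert a nominal monthly series to real terms at base-period prices."""
    return [x * base_cpi / c for x, c in zip(nominal, cpi)]

def cape(real_prices, real_earnings, window=120):
    """CAPE: real index level divided by the trailing `window`-month
    (10-year) mean of real earnings; None until enough history exists."""
    out = []
    for i, price in enumerate(real_prices):
        if i + 1 < window:
            out.append(None)  # fewer than `window` months of earnings so far
        else:
            avg_earnings = sum(real_earnings[i + 1 - window:i + 1]) / window
            out.append(price / avg_earnings)
    return out
```

The averaging window is the parameter that embodies the assumption, noted later in the paper, that the average economic cycle lasts 10 years.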
The difference between Shiller’s CAPE and the S&P trailing market
p/e since January 1881 up until January 2008, the year that the global
banking and sovereign debt crisis commenced, is illustrated in Figure 1
using monthly data.
Signals of relative over- or undervaluation of the
stock market are given by the difference between the running value of
Shiller’s CAPE and its long-term average value. This third series is negative if the running value of CAPE is less than its long-term average, indicating that equities are undervalued, while a positive difference implies
that the market is overvalued. CAPE figures for each month can also be
compared with their long-term values over decades.
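The signal just described can be sketched directly: each month's CAPE is compared with the average of the series up to that month, so the benchmark itself is an expanding long-term average (as in the 1881–1929 example below).

```python
def valuation_signal(cape_series):
    """Deviation of each month's CAPE from the long-term average of the
    series up to that month: negative values flag relative undervaluation,
    positive values relative overvaluation."""
    signals, running_sum = [], 0.0
    for months, value in enumerate(cape_series, start=1):
        running_sum += value
        signals.append(value - running_sum / months)
    return signals
```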
With the benefit of hindsight there are some periods where Shiller’s CAPE appears to have predicted the relative valuation of the stock market
with some success. For example, in September 1929, just before the Wall
Street crash of October of that year, the trailing market p/e was 20.17,
which was high in historic terms, but not as high as the CAPE value of
32.56 which would have compared unfavourably with the long-term average CAPE p/e of 14.83 calculated between January 1881 and September
1929. Similarly, at the height of the dot-com bubble, Shiller’s CAPE had
reached a value of 44.2 by December 1999, well above its long-term average from 1881 which at that point was 15.48. In contrast, the S&P 500
p/e was at 29.66.
2.2 Critical issues
The use of CAPE as a long-term indicator of market value is based on
the fundamental assumption that the proportion of corporate earnings to
GDP is mean reverting. Without this, using any p/e ratio (the Shiller p/e
is a special case with an assumption that the average economic cycle lasts
10 years) to measure the relative value of the stock market can be problematic. Shiller’s CAPE, as a long-term indicator of value, for example,
assumes that the standard yardstick, the 10-year moving average of inflation-adjusted market p/e ratios, is mean-reverting with no trend.
Some investment professionals have argued that it is misleading to use
the long-run average of CAPE covering a period as long as 1881 to 2011 since there are some historic anomalies that distort the underlying cyclical
pattern. The average value of Shiller’s CAPE over the longest time frame
is 16.42. However, from 1881 to December 1959 there is, if anything, a
very weak downward time trend in the p/e ratio, but this is followed by a
marked upward trend thereafter. There appears to be a statistically significant break in the series. The average value of CAPE from 1960 to 2011 is
calculated at 19.44 compared with the earlier period up to 1959, calculated
at an average value of 14.44.
The most reasonable and popular explanation for this break appears to
be that the long-period CAPE includes some abnormal periods following
the two World Wars with the retraction in world trade and globalisation
occasioned by the Great Depression of the 1930s. The preferred time
frame for many analysts is to calculate long-term averages from 1960 after
the impact of the Second World War had been worked out in the rebuilding of Europe. Some analysts have also argued that the 10 years after 1914
should be excluded from any long-term average p/e as US companies benefitted from supplying the European combatants in World War One, experiencing an exceptional upswing in profits followed by a sharp decline in
1921. However, the average value of CAPE between January 1881 and
July 1914 was 16.6, which is not statistically significantly different from its
value from 1881 to 2011, so the exceptional historical period criticism of
the method does not seem to hold.
2.3 Data requirements
Historical values for CAPE for the S&P 500 index in the US are available
from Shiller’s website which is regularly updated.
The dataset contains
monthly stock price, dividends, and earnings data and the consumer price
index (to allow conversion to real values), starting from January 1871,
so the first CAPE estimate in the data series is from January 1881. The
S&P 500 index only came into being in 1957, so Shiller’s earlier price and
earnings data involve a number of assumptions, adjustments and interpolations. After 1957, however, calculations of CAPE are based on relatively
timely published data. Monthly CAPE estimates are calculated by Shiller using publicly available data, the S&P Index adjusted for inflation, and
earnings data for S&P 500 companies.
There is a lag of four months, so the CAPE for July 2012, for example, is based on earnings data for April
of that year.
The most controversial data issue concerning CAPE is the information
on earnings required for the denominator of the ratio, and this also concerns non-cyclically adjusted p/e ratios. Indeed, one plausible explanation for the more recent upward trend in the value of CAPE lies in a second issue: the measurement of earnings data. Both the trailing S&P 500 p/e and the
CAPE valuation models require accounting data on historic corporate
earnings and this involves issues of definition, accuracy and transparency.
The definition of earnings used in the denominator of valuation ratios
based on p/e ratios is not clear-cut. Shiller’s CAPE data set, which looks at the value of the S&P 500 in relation to its long-term value, uses reported ‘earnings per share’ on the basis of the obligations laid down on companies
by US accounting standards. A serious criticism of Shiller’s method arises
from changes over time in the measurement of earnings arising from tighter accounting standards and a better understanding of what constitutes a
company’s net income. This means that like is not being compared with
like. Therefore, arguing that share prices appear overvalued currently on
the basis of comparing CAPE against its long-term value fails to recognise
that improvements in the measurement of the quality of earnings are not
reflected in the earlier p/e ratios used in the calculations.
The last four decades have seen a number of reforms as a result of the
creation of the Financial Accounting Standards Board (FASB) in the US in
1973. Over the years, the work of the FASB has improved the information available to investors by implementing mandatory standards that have removed accounting treatments that flattered earnings, such as over-generous acquisition provisions, and by ending the capitalisation of borrowing costs.
The implementation of accounting reforms means that the quality of reported earnings has steadily improved. This means that in order to compare like with like it would be necessary to make downward adjustments in past estimates of earnings data to account for the changes, which would raise the value of cyclically adjusted historic p/e ratios.
Tobin’s q is another measure of relative equity market value which has
been popularised within the investment community after the publication
of a book jointly authored by investment professional Andrew Smithers
and academic economist Stephen Wright in the same year as Shiller’s
book which popularised the CAPE concept. The authors came to the
same conclusion as Shiller, albeit using a different yardstick of fair value,
that the US equity market was overvalued in 2000. The authors based
their analysis on the theory proposed over 40 years ago by Nobel Prize winner Professor James Tobin concerning the importance of the ratio, labelled q, of the market value of a firm, including its equity and debt, to the net replacement value of its assets. According to Tobin
(1969) the value of q is seen as a proxy for the rate of return on capital and
the competitive conditions prevailing in an industry. The q ratio can be
measured as follows:
Tobin’s q = Market value of firm/replacement value of assets
In theory, if the value of q is greater than 1, then the numerator is greater
than the denominator which implies that a firm is earning excess profits
since its market value is greater than the cost of replication by renewing
its entire capital stock. If the value of q is less than 1 then it implies that
the capital market is signalling that the company’s assets would be better
employed under different ownership and that the rate of return is below
the competitive level. In a competitive general equilibrium with all firms
earning the competitive rate of return the value of q should be equal to
unity, which if it can be measured, becomes a yardstick measure of fair
value for the equity market.
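The ratio and its interpretation against the competitive-equilibrium benchmark of unity can be sketched as follows; the function names are mine, not Tobin's notation:

```python
def tobins_q(market_value, replacement_value):
    """Tobin's q: market value of the firm (equity plus debt) divided by
    the net replacement value of its assets."""
    return market_value / replacement_value

def q_signal(q):
    """Read q against the competitive-equilibrium benchmark of unity."""
    if q > 1:
        return "overvalued"    # excess profits priced into market value
    if q < 1:
        return "undervalued"   # returns below the competitive level
    return "fairly valued"
```

With the figures cited later in the paper, both the 1929 estimate (1.33) and the 2000 estimate (1.24) yield `q_signal(...) == "overvalued"`.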
In terms of a guide to the relative valuation of corporate investments, if
q is greater than 1, it implies that investors are placing a higher value on
equities than the cost of replacing them which means that, in the absence
of persistent aggregate market power, the equity market is overvalued.
A value of q for the market less than 1 means that the equity market
is undervalued. There have been a number of academic studies that
have attempted to develop the concept of Tobin’s q such as the work by
Hayashi (1982) in order to look at the conditions under which potentially
observable average estimates can be used as proxies for unobservable, but
theoretically important, marginal values.
For the use of the investment community, regular estimates of the relative value of the S&P 500 using calculated values of Tobin’s q are made
monthly by UK investment firm Smithers & Co. Smithers & Co. provide
long-run estimates of q for the US stock market based on aggregating data
from two sources: annual estimates of q for the period 1900–1951 are taken
from a paper by Stephen Wright, while quarterly estimates of q from 1952
onwards are derived using data published by the Federal Reserve. These
combined estimates for q are shown in Figure 2 which suggests that the
estimates for q in 1929 at 1.33 and in 2000 at 1.24 were both signalling an
overvalued stock market. These estimates are considerably higher at 91%
and 78% respectively than the long-term arithmetic average of q of 0.69.
An alternative set of bottom-up estimates for the value of q is provided
on a quarterly basis by John Mihaljevic, one of James Tobin’s former
researchers. Mihaljevic’s company estimates the value of q for a set of the 1,000 largest US listed companies using an estimation methodology employed by Chung and Pruitt (1994). Chung and Pruitt use a formula
that can be calculated using publicly available company-specific accounting and market pricing data.
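A sketch of the Chung and Pruitt approximation, as their formula is usually stated (the input names and balance sheet figures below are my own illustrative assumptions, not their notation):

```python
def approximate_q(mve, preferred_stock, current_liabilities,
                  current_assets, long_term_debt, total_assets):
    """Chung-Pruitt approximate q, as usually stated: the market value of
    equity plus the liquidating value of preferred stock plus DEBT, all
    divided by the book value of total assets, where DEBT is net current
    liabilities plus the book value of long-term debt."""
    debt = (current_liabilities - current_assets) + long_term_debt
    return (mve + preferred_stock + debt) / total_assets

# Hypothetical balance sheet figures ($m), purely illustrative:
q = approximate_q(mve=100, preferred_stock=0, current_liabilities=30,
                  current_assets=20, long_term_debt=40, total_assets=100)
# q = (100 + 0 + 10 + 40) / 100 = 1.5
```

The appeal of the approximation is that every input is publicly available accounting or market pricing data, avoiding any direct estimate of replacement value.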
3.2 Critical issues
In theory, Tobin’s q ratio assumes that in the long-run, market forces should
mean that the theoretical value of q for any company, sector or the entire
market will be unity. However, Tobin’s q
ratio is impossible to calculate with precision, as the concept of replacement value
is inherently subjective. A criticism of the
Tobin’s q approach to valuing equities
has been that measures of q fail to incorporate intangible assets that have not been placed on balance sheets, but
which still affect the rates of return earned on capital. To the extent that
intangibles are ignored, estimated values of q understate true replacement
cost resulting in estimates of q higher than ‘true’ q, making assets appear
overvalued when they are fairly valued or undervalued. At the company
level, Tobin’s q ratios will vary significantly from industry to industry
due to the fact that companies in some industries employ relatively little
capital and therefore generate unusually high returns on capital. In addition, the company-specific q estimation method is not good at dealing with
exceptional businesses, i.e. companies that have a large off-balance sheet,
intangible source of sustainable business value such as Apple, although
it might be expected that these problems cancel out at the market level.
This criticism that the denominator of the q ratio will be biased downwards causing an upward bias in the value of q was voiced by Hall (2001)
and Laitner and Stolyarov (2003). If true, it implies that although the long-run value of q should be close to unity, in theory, problems with the valuation of intangibles mean that measured values of q will be consistently biased upwards, with a typical value above 1. Laitner and Stolyarov found that
Tobin’s average q has usually been well above 1, but fell below 1 during 1974–1984. The authors state that while the stock market value in
the numerator of q reflects ownership of physical capital and knowledge,
the denominator measures just physical capital. They argue that periodic
arrivals of important new technologies, such as the microprocessor in the
1970s, suddenly render old knowledge and capital obsolete, causing the
stock market to drop. National accounts measures of physical capital miss
this rapid obsolescence so q appears to drop below 1. Unfortunately, this
result clashes with other empirical evidence showing that many estimates
of q over the long-term are well below 1. For example, the average value
of q for the S&P 500 was 0.69 for the period 1900–2012 using the Smithers & Co. dataset. The value of q was greater than unity in 13 years and less than unity for 100 years, so there is no evidence of the upward bias in q suggested by the calculations of Laitner and Stolyarov (2003). Robertson and Wright (2005) propose a theoretical explanation for
the puzzle of observed values of q less than unity despite the underestimate of intangibles, but the most plausible explanation of the differences
in the estimates of the aggregate value of q lies in the existence of severe
measurement problems associated with this measure of relative market
value. Unfortunately, these data problems cast considerable doubt on its
usefulness to the investment community.
3.3 Data requirements
Estimates of the value of q at the industry level have produced results that
suggest there are serious data issues involved in using the ratio for the US
economy as a whole. Lindenberg and Ross (1981) estimated the value of
q for a number of industries using data covering the period 1960–1976.
The authors found that the value of q ranged from 3.08 for photo equipment to 0.85 for primary metals, but the median value was 1.35 which was
well above 1. This suggests that the industries studied experienced high
degrees of market power with rates of return above the competitive rate having been sustained over long periods of time. Alternatively, it indicates
that the measurement of the replacement value of assets, the denominator
in the ratio, has been seriously underestimated biasing the measurement
of q upwards.
Therefore, measuring q ratios is problematic for individual companies
and industries, but the assumptions required to evaluate a q ratio for the
entire equity market are close to heroic. As a result, the calculation of
aggregate values of q using a bottom-up approach requires a number of
critical assumptions. In contrast, in order to calculate Tobin’s q at the
aggregate level using a top-down approach, it is common to take the ratio
of market capitalization to corporate net worth. This ratio can be used to
assess whether investors are being too bearish (q less than unity) or being
too exuberant (q greater than unity). The data for the United States used
by most analysts is taken from the Federal Reserve’s quarterly Flow of Funds accounts.
The most serious problem with estimations of q for investment purposes,
however, whether a top-down or bottom-up approach is taken, remains the
quality of balance sheet data used in the denominator as a proxy for the
replacement value of assets. For example, Laitner and Stolyarov (2003)
argue that estimates of q are biased above 1 because there are significant
quantities of unrecorded intangible assets and argue that the stock of
applied knowledge is around 30–50% of the size of the physical capital
stock. Their estimates of the denominator use, as a data series, the sum of
private non-residential fixed capital stock and non-farm inventories, but
Wright (2006) challenges their higher estimates of q based on their data.
He argues that their estimates of both the numerator and the denominator are flawed. In particular, their denominator is biased since it includes
some assets that should not be in the denominator such as non-residential
fixed assets belonging to households and non-profit-making institutions
while it omits some tangible assets that should be included, such as the value of land. Correcting Laitner and Stolyarov’s data series from 1952
onwards for these adjustments to the value of intangible assets meant that
the value of q, rather than being predominately above 1, is predominately
below it. For example, at the market peak of 2000, their estimate of q is
60% higher than the corrected measure.
However, even accepting Stephen Wright’s argument that the estimated value of q is predominately below 1 rather than above it does not explain why q consistently differs from its theoretical value of unity in the long-run. According to Wright (2006) it implies a puzzle which suggests that
either capital stock data is based on overstated assumed capital lives or
incorrect investment price deflators or, alternatively, that markets, at least
in the long-run, have ‘systematically under-valued corporate assets over a
long period’. He concludes that the downward adjustments made to the
estimated value of q do not mean that intangible assets do not exist, rather:
…it just means that their existence or their magnitude cannot be inferred
straightforwardly from the properties of q. It also means that if they do exist in
significant quantities either statisticians or markets must be getting something
systematically wrong. Indeed this conclusion is inescapable, whatever the value
of intangible assets, since even on the basis of tangible assets alone measured q
is systematically below unity.
The ‘Equitisation ratio’ (ER) is defined and measured as the ratio of the
market capitalization of a country’s main stock market to its gross domestic product. Berkshire Hathaway’s chairman, Warren Buffett, has allegedly been quoted as saying ‘this ratio is probably the best single measure of where valuations stand at any given moment’. The long-run mean-reverting Equitisation ratio should be unity on average, a value to which the market should return when equities are fairly valued. This is based on a plausible yardstick used by some analysts to value companies: that the average profit to turnover ratio is 10%
with stocks trading on an average multiple of 10×. Therefore, use of the
Equitisation ratio is based on a presumed existence of a relatively stable
long-term relationship between the market capitalisation of the listed
stocks in a sector and aggregate industry turnover. Indeed many analysts
use market value to turnover ratios to value new, rapidly growing industries where profits are initially low or negative. At the economy level the
concept of the Equitisation ratio prescribes that these ratios should aggregate out to a relatively stable, mean-reverting ratio of market capitalisation
to value added across all industries, or in other words to GDP. A stable
Equitisation ratio at the aggregate level, therefore, is based on the concept
of a stable ratio of market capitalisation to turnover across industries which
is invariant over time and which automatically adjusts for changes in the
relative importance of sectors of the economy in GDP.
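The ratio and the arithmetic behind the presumed benchmark of unity can be sketched as follows (a simplified illustration of the reasoning above, not an established derivation):

```python
def equitisation_ratio(market_cap, gdp):
    """Equitisation ratio: market capitalisation of a country's main
    stock market divided by nominal GDP."""
    return market_cap / gdp

# Why unity is the presumed benchmark: with profits at 10% of turnover
# and stocks trading on a 10x earnings multiple,
#   market cap = 10 * (0.10 * turnover) = turnover,
# so the aggregate ratio to value added (GDP) should centre on 1.
profit_margin, multiple = 0.10, 10.0
assert profit_margin * multiple == 1.0
```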
Using market capitalisation and official government statistics data,
Figure 3 shows estimates of the Equitisation ratio for the United States
from 1952 until 2012 based on the ratio of the value of the S&P 500
to GDP. The data shows that far from being stable, the US market’s Equitisation ratio is volatile with a mean value of 0.77 over the period and
a range from a low of 0.43 to a high of 1.79. There has also been an upward trend in the ratio over the latter part of the period. The relative valuation line is based on the difference
between the Equitisation ratio in any quarter and its long-term average
value. Comparing the quarterly values against the average value over the
period investigated, the ratio reached a peak of 1.79 in the third quarter of
2000, giving a relative valuation 131% over the average level of the ratio.
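The relative valuation line described above can be sketched as the percentage deviation of each quarterly reading from the full-period mean; the function name is mine:

```python
def relative_valuation(er_series):
    """Percentage deviation of each quarterly Equitisation ratio from
    the average over the full period studied; positive values indicate
    a reading above the long-term mean."""
    mean = sum(er_series) / len(er_series)
    return [100.0 * (value / mean - 1.0) for value in er_series]
```

Applied to the figures cited in the text (a peak of 1.79 against a period mean of 0.77), this yields a deviation in the region of 130% above the average.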
4.2 Critical issues
Unlike Tobin’s q, there is no rigorous theoretical guidance of what the
long-run value of the Equitisation ratio should be. In a steady state with a
constant rate of return on equity and GDP rising in line with its long-term
trend then, as noted above, there is an argument that the Equitisation ratio
should be unity. Equity markets in this case would then be fairly valued
and a value of the ratio less than 1 would imply a buy signal, and greater
than 1 a sell signal. However, this simple rule will not be valid if the
Equitisation ratio is also affected by the rate of GDP growth. For example,
if more rapid periods of national income growth always produced rising
rates of return on capital then the Equitisation ratio should be greater
than 1 on average, but less than 1 on average if below average growth in
GDP produced lower rates of return. Beyond these generalisations it is
difficult to interpret the value of the Equitisation ratio at any time, and its
use as a practical investment tool is limited by the theoretical possibilities
explored by Ritter (2005) that faster GDP growth can in some cases be
associated with declining rates of return to equities.
Furthermore, of necessity, the use of the Equitisation ratio as a guide
to market value is limited to countries such as the United States and the
United Kingdom where the institutional framework is such that there are
liquid markets with a large number of listed companies whose activities
account for a significant proportion of domestic GDP. In contrast, using the
Equitisation ratio as a signal of the relative value of equities is constrained
in most emerging markets which do not have well-functioning stock markets. The use of this ratio is limited by problems caused by inadequate government regulation, by a limited number of listed companies, whose activities in some countries such as China are linked to the export trade and to other countries’ GDP, and by poor private information gathering and dissemination. The impact of these institutional factors on the calculated value of Equitisation ratios for a selection of emerging markets based
on research undertaken by Yartey (2008) at the IMF is shown in Figure 4. It is clear that although there is a relationship between the relative size of the Equitisation ratio and GDP per capita, there is no relationship with the size of the economy, as the case of China and Brazil demonstrates.
The credibility of the Equitisation ratio as a yardstick of relative value
has been challenged on empirical grounds by Ritter (2005). Using the
Equitisation ratio to assess the relative value of the stock market implies,
necessarily, that if the ratio is mean reverting, economic growth measured in terms of increases in real GDP should lead to increases in the value of
the stock market. Indeed although there is ample evidence that in the
short-run anticipated falls in GDP in recessions often have a negative
impact on stock returns, the case for investing in emerging markets is
based soundly on an expected positive relationship between economic
growth and corporate earnings.
4.3 Data requirements
The data requirements to calculate Equitisation ratios are relatively
straightforward since they are measured by dividing a time series of
a stock market index such as the S&P 500 by GDP in current prices.
This would appear to pose no particularly difficult data problems for developed countries where national official statistics offices employ international best practice to estimate GDP. However, in emerging markets
where national statistical offices are understaffed and underfunded, GDP
estimates can be prone to considerable distortions especially in Africa
(see Jerven 2011). However, in many developed and middle income
economies an undervaluation of the size of the service economy and the
exclusion of the economic activities generated by the ‘shadow’ economy (see
Schneider 2011) means that the denominator of the Equitisation ratio will
be underestimated, biasing the ratio upwards. In emerging markets these ratios are generally lower than in countries like the US and the UK for institutional reasons; therefore, this source of measurement error could be significant.
A more serious data issue that arises in using estimates of the
Equitisation ratio as an investment signal in developed markets concerns
the time taken between the first quarterly GDP estimate, and hence full-year estimate, made by the statistical office and the final figure after a series of
revisions has been made. In the US, for example, the Bureau of Economic
Analysis releases monthly estimates of GDP, which can be used to estimate running national income over a 12-month period. However, whereas
the S&P market index can also be calculated monthly as soon as a month
is completed, the Advance Report of GDP is made one month after a
quarter ends. This is followed by a Second Report and a Third Report
which are published two and three months respectively after the quarter
ends. The Third Report is often somewhat different from the Advance
Report, and GDP estimates can be subsequently revised significantly
again in later years. This lack of timeliness and uncertainty in the data
makes the Equitisation ratio less useful as an investment valuation yardstick whatever its other strengths. This disadvantage can be mitigated
by the use of GDP proxy indices derived from survey data which are
produced monthly – for example, the Institute for Supply Management (ISM) PMI indices in the US and indices produced for other major countries by companies like Markit.
There still remains disagreement about the signals on relative market
valuation given by the different valuation models discussed above. This is shown in practical terms in Figure 5, which graphs time series for CAPE, Tobin’s q and the Equitisation ratio, relative to their arithmetical averages, over the period 1952 to 2010.
This period was chosen in order to
remove the upward biases resulting
from the split between the pre- and
post-Second World War periods and
also to ensure that the Equitisation
ratios were based on final GDP estimates. Analysing this later period
avoids the major assumptions required to adjust the data for a time when
national income accounting was less formalised. The pattern exhibited
shows that the three measures move together very closely and, apart
from a limited number of periods, all of them provide the same signal
of an over- or undervalued S&P 500 index. The correlation coefficients
between all of them are over 0.9 and are statistically significant. However, Harney and Tower (2003), in an attempt to compare the two valuation methods popularised in 2000 by Professor Shiller, on the one hand, and Smithers & Co. on the other, found that Tobin’s q was superior to the p/e ratio at predicting rates of return over a number of alternative horizons. No
long-run comparative tests have been carried out on the predictive ability
of the Equitisation ratio to the author’s knowledge.
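The pairwise correlation coefficients mentioned above are Pearson correlations between the deviation series; a self-contained sketch of the calculation (not the authors' own code) is:

```python
def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```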
All three models suffer problems in terms of their data requirements
particularly in estimating the denominators. The CAPE model wins out
vis-à-vis its rivals in terms of timeliness with monthly updates provided on
Professor Shiller’s website. However, there are still problems with earnings data particularly in areas such as accounting for pension liabilities.
The improvements that have taken place in the quality of reported earnings over time mean, unfortunately, that a market p/e calculated for 1970,
for example, is not strictly comparable with one calculated in 2012. This
means that comparing recent values of CAPE with its long-run average
can provide a distorted picture because of the instability in the measurement of earnings. This is serious since the CAPE valuation yardstick is
grounded in empirics and relative valuations are measured solely in relation to its long-run average value, with the theoretical justification that this
will be mean-reverting and related to the rate of return on equity capital
and the share of profits in GDP.
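The CAPE calculation itself is mechanically simple; the difficulty, as noted above, lies in the comparability of the earnings series over time. A minimal sketch in Python, assuming the price and earnings series have already been deflated to real terms (the figures below are purely illustrative, not actual S&P 500 data):

```python
def cape(real_prices, real_earnings, window=10):
    """Cyclically adjusted p/e: the current real price divided by the
    trailing `window`-year average of real (inflation-adjusted) earnings."""
    if len(real_earnings) < window:
        raise ValueError("need at least `window` years of earnings data")
    avg_earnings = sum(real_earnings[-window:]) / window
    return real_prices[-1] / avg_earnings

# Illustrative annual index levels and earnings per share, in real terms.
prices = [90, 95, 100, 104, 110, 115, 120, 126, 130, 135]
earnings = [5.0, 5.2, 4.8, 5.5, 6.0, 5.8, 6.2, 6.5, 6.1, 6.6]
print(round(cape(prices, earnings), 1))  # → 23.4
```

Averaging earnings over ten years is what smooths out the cyclical spikes (such as the collapse of earnings in 2008–09) that make a raw one-year p/e so erratic.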
A number of investment firms produce quarterly estimates of q for the
United States, but calculations of it vary depending on different methods
of valuing replacement assets. Estimates of this variable for the market as
a whole are extremely uncertain and are the subject of a raging economic
debate. The proponents of different methods of estimating q produce
widely different results which are dependent on methodology. These
results potentially bias estimated q downwards or upwards from the theoretical value of unity. This causes a problem in using q as a yardstick of fair value.
The Equitisation ratio can also be calculated on a quarterly basis. Although
the GDP data used in calculating Equitisation ratios are outdated and
subject to statistical revisions, proxy data are now available for most of the
world's developed markets. There is no accepted theory of what the long-run value of the Equitisation ratio should be, although yardsticks of value
used by analysts suggest it should be close to unity.
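As a sketch of the mechanics, assuming the ratio is defined, as its name suggests, as total equity market capitalisation divided by nominal GDP, and using a purely illustrative ±0.2 tolerance band around unity as the analysts' yardstick:

```python
def equitisation_ratio(market_cap, nominal_gdp):
    """Aggregate equity market capitalisation relative to nominal GDP,
    both expressed in the same currency units."""
    return market_cap / nominal_gdp

def signal(er, band=0.2):
    """Crude valuation signal: flag deviations of more than `band` from
    the unity yardstick. The band width is an illustrative assumption."""
    if er > 1 + band:
        return "overvalued"
    if er < 1 - band:
        return "undervalued"
    return "fairly valued"

# Hypothetical figures, e.g. in $ trillions.
er = equitisation_ratio(15.0, 16.0)
print(round(er, 3), signal(er))  # → 0.938 fairly valued
```

Because the GDP denominator is revised long after publication, any such signal computed in real time rests on provisional data, which is the timeliness problem noted above.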
Finally, a word of caution is necessary when using any of these valuation tools for investment purposes. All of them have signalled periods of
significant length when the stock market is relatively overvalued or undervalued, without any corrections in the value of the S&P 500. The length of
these periods, despite the fundamental warnings given by these methods,
can make a considerable difference to the return on any fund focused on
short-run performance. However, given the fact that all three methods
produce similar signals of significant under- or over-valuation in the US
stock market, they are all suitable, either individually or in combination,
for use as basic indicators of long-term equity value.
In recognition of this uncertainty, World Economics produces quarterly updates of what all three investment models are
indicating about the relative valuation of the S&P 500. These are published on this website.
Chung, K.H. & Pruitt, S.W. (1994) A Simple Approximation of Tobin’s q. Financial
Management, 23, pp. 70–74.
Dodd, G. & Graham, B. (1934) Security Analysis. The Maple Press Co., PA.
Elmendorf, R.G. (2011) Accounting rates of return as proxies for the economic rate of
return: an empirical investigation. Journal of Applied Business Research, 9, 2, pp. 62–75.
Hall, R. (2001) The stock market and capital accumulation. American Economic Review,
91, pp. 1185–1202.
Harney, M. & Tower, E. (2003) Predicting equity returns using Tobin's q and price/earnings ratio. Journal of Investing, 12, 3, pp. 58–70.
Hayashi, F. (1982) Tobin’s marginal and average q: a neoclassical interpretation.
Econometrica, 50, pp. 213–224.
Jerven, M. (2011) Counting the bottom billion: measuring the wealth and progress of
African economies. World Economics, 12, 4, pp. 35–52.
Laitner, J. & Stolyarov, D. (2003) Technological change and the stock market.
American Economic Review, 93, 4, pp. 1240–1267.
Lang, L.H.P. & Stulz, R.M. (1994) Tobin's q, corporate diversification, and firm
performance. Journal of Political Economy, 102, 6, pp. 1248–1280.
Lindenberg, E. & Ross, S. (1981) Tobin's q ratio and industrial organisation. Journal of
Business, 54, 1, pp. 1–32.
Ritter, J.R. (2005) Economic growth and equity returns. Pacific-Basin Finance Journal,
13, pp. 489–503.
Robertson, D. & Wright, S. (2005) Tobin’s q, asymmetric information and aggregate
stock market valuations. Birkbeck College Working Paper, University of London.
Online at: http://www.ems.bbk.ac.uk/faculty/wright/pdf/asymmetricInfo.pdf (accessed
23 July 2012).
Schneider, F. (2011) The shadow economy and shadow economy labour force:
what do we (not) know? Worldeconomics.com, November. Online at: http://www.worldeconomics.com/Papers/The%20Shadow%20Economy%20and%20Shadow%20
e75a-4624-bb5f-0f10c6d6d4b5.paper (accessed 23 July 2012).
Shiller, R. (2000) Irrational Exuberance. Yale University Press.
Shiller, R. (2012) Shiller’s website with updated dataset at: http://www.econ.yale.
edu/~shiller/data.htm (accessed 23 July 2012).
Smithers, A. & Wright, S. (2000) Valuing Wall Street, Protecting Wealth in Turbulent
Markets. McGraw-Hill Book Company.
Tobin, J. (1969) A general equilibrium approach to monetary theory. Journal of Money
Credit and Banking, 1, 1, pp. 15–29.
Wright, S. (2004) Measures of stock market value and returns for the US non-financial corporate sector, 1900–2002. Review of Income and Wealth, 50, 4, December, pp. 561–584.
1 - The concept of value investing was refined by Benjamin Graham and David Dodd, professors at Columbia University, and publicised a few years after the Wall Street crash of 1929 (Dodd & Graham 1934).
2 - There are several ways to interpret a p/e ratio. A higher than average p/e ratio for a company implies that
investors are paying relatively more for a unit of income (earnings), so the stock is more expensive than a
company with a lower p/e ratio. Alternatively, if the p/e ratio is expressed for a 12-month period, it can be
interpreted as the number of years it would take for the company's generated earnings to repay
the purchase price of the stock. This is a simplification that ignores any growth in earnings over the period,
inflation, and also the time value of money.
3 - Barclays Bank publishes an annual study which looks at the long-term relationship between the returns on
equities and those on gilt-edged securities in the United Kingdom. The 57th edition (2012) is available at the
following link: http://www.scribd.com/doc/83012873/Barclays-capital-Equity-Gilt-Study-2012.
4 - In March 2000, Shiller's book titled Irrational Exuberance was published and named after a quote by Federal
Reserve Chairman Alan Greenspan. One of the book's arguments was that equity markets were significantly
overvalued when compared with the long-term market p/e ratio. Although the CAPE concept is associated with
Shiller, Benjamin Graham had suggested the idea of averaging p/e ratios over the business cycle as far back as 1934.
5 - The long-run chart is distorted for representation purposes by the collapse in the value of the S&P, but not
monthly earnings, in November 2008 as the financial crisis deepened. The running market p/e rose from 27.2 in October 2008 to a high of 123.7 by May 2009 before declining to reach 28.5 by November 2009.
6 - See Shiller (2012): http://www.econ.yale.edu/~shiller/data.htm.
7 - Earnings data is available from the Standard & Poor's website: www.spglobal.com.
8 - There is no standard definition, but high-quality earnings in a firm can be regarded as being repeatable,
controllable and bankable, in contrast to low-quality earnings, which may be temporarily boosted by a
non-repeatable use of an accounting provision.
9 - See Smithers and Wright (2000).
10 - See Wright (2004). The dataset is available at the following location: http://www.ems.bbk.ac.uk/faculty/wright/.
11 - The construction of this data series available to clients is described on the website http://www.tobinsq.com.
12 - In this approach, q is measured as the number of common shares outstanding times the share price,
plus the book value of total assets minus the book value of common equity, all divided by the book value of total assets.
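The approximation described in this note translates directly into code; a minimal sketch, in which the variable names and the figures for the hypothetical firm are illustrative, and all book values are taken from the balance sheet:

```python
def approximate_q(shares_outstanding, share_price,
                  book_total_assets, book_common_equity):
    """Chung & Pruitt (1994) style approximation of Tobin's q:
    the market value of common equity, plus book total assets minus
    book common equity (i.e. the book value of all other claims),
    scaled by book total assets."""
    market_equity = shares_outstanding * share_price
    return (market_equity + book_total_assets
            - book_common_equity) / book_total_assets

# Hypothetical firm: 100m shares at $12, $2bn of total assets,
# $900m of book common equity.
q = approximate_q(100e6, 12.0, 2e9, 0.9e9)
print(round(q, 2))  # → 1.15
```

The appeal of the approximation is that every input is readily observable, whereas a full q calculation requires estimating the replacement cost of assets, which, as the main text notes, is the principal source of disagreement.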
13 - A great deal of the academic work in relation to estimates of q has been undertaken in the industrial
organisation field in order to estimate rates of return. Uses of the ratio in this context can be found in
Lindenberg and Ross (1981); Lang and Stulz (1994); and Elmendorf (2011).
14 - This is available from the Federal Reserve Z.1 Statistical Release, section B.102, Balance Sheet and
Reconciliation Tables for Nonfinancial Corporate Business. The estimate of q at the aggregate level is the ratio of
line 35 (market value) to line 32 (replacement cost).
15 - However, Wright corrected the numerator by making a number of adjustments and recalculated the data
series used by Laitner and Stolyarov, but found that the bias of the calculated q values was relatively trivial
compared with the adjustments made to the denominator.
16 - Business residential capital is also excluded.
17 - Wright (2006), p. 8.
18 - Wright (2006), p. 7.
19 - However, it could be argued that investors’ anticipations which drive the S&P 500 are formed on the basis
of the same imperfect data set. This is an interesting question which requires more theoretical analysis and